Sample records for maximum likelihood (MML)

  1. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
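
    As background for the record above: MML estimation marginalizes the latent trait out of the item-response likelihood, typically on a quadrature grid inside an EM loop. The Python sketch below illustrates only that core idea for a plain 2PL model, maximizing the quadrature-based marginal log-likelihood directly with a general-purpose optimizer rather than via the EM algorithm the article describes; it is not the loglinear-smoothing estimator of Casabianca and Lewis, and all data and settings are simulated assumptions.

```python
# Minimal sketch of marginal maximum likelihood (MML) for a 2PL IRT model:
# the latent trait is integrated out on a fixed quadrature grid with standard
# normal weights, and the marginal log-likelihood is maximized directly with a
# general-purpose optimizer (real implementations typically use EM).
# All data are simulated; names and settings are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n_items, n_persons = 10, 500

# Simulate "true" discriminations a, difficulties b, abilities theta, responses X.
a_true = rng.uniform(0.8, 2.0, n_items)
b_true = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_persons)
p_true = 1.0 / (1.0 + np.exp(-a_true * (theta[:, None] - b_true)))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

# Quadrature grid for the latent-trait distribution (standard normal prior).
nodes = np.linspace(-4.0, 4.0, 41)
weights = norm.pdf(nodes)
weights /= weights.sum()

def neg_marginal_loglik(params):
    a = np.exp(params[:n_items])                 # keep discriminations positive
    b = params[n_items:]
    p = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))     # (nodes, items)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    # log P(response pattern | theta = node) for every person/node combination
    loglik_nodes = X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T  # (persons, nodes)
    # Marginalize the latent trait: log sum_q w_q P(X | node_q), done stably.
    m = loglik_nodes.max(axis=1, keepdims=True)
    marginal = m[:, 0] + np.log(np.exp(loglik_nodes - m) @ weights)
    return -marginal.sum()

start = np.zeros(2 * n_items)
fit = minimize(neg_marginal_loglik, start, method="L-BFGS-B")
a_hat, b_hat = np.exp(fit.x[:n_items]), fit.x[n_items:]
print("true vs. recovered discriminations:", np.round(a_true, 2), np.round(a_hat, 2))
print("true vs. recovered difficulties:   ", np.round(b_true, 2), np.round(b_hat, 2))
```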

  2. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  3. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  4. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  5. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  6. Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data

    NASA Astrophysics Data System (ADS)

    Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.

    2018-03-01

    We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
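
    A minimal sketch of the central idea behind such an estimator, under assumed simplifications (a Schechter-shaped mass function, Gaussian errors on log-mass, and no selection function, non-detections, or large-scale structure): each object contributes a likelihood equal to the model distribution convolved with its own error kernel, which is what counteracts Eddington bias. This is a toy illustration, not the dftools implementation.

```python
# Toy illustration of fitting a galaxy mass function to masses with Gaussian
# measurement errors by marginalizing each object's error kernel against the
# model distribution (countering Eddington bias). This is a conceptual sketch,
# not the dftools/MML implementation of the paper; the Schechter form, grid,
# and all settings are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

def schechter_logm(x, logm_star, alpha):
    """Unnormalized Schechter function in x = log10(M/Msun)."""
    mu = 10.0 ** (x - logm_star)
    return np.log(10.0) * mu ** (alpha + 1) * np.exp(-mu)

# Simulate true log-masses from a Schechter shape by grid sampling,
# then add Gaussian measurement errors of 0.2 dex.
grid = np.linspace(7.0, 12.5, 800)
true_pars = (10.6, -1.3)
pdf_true = schechter_logm(grid, *true_pars)
x_true = rng.choice(grid, size=2000, p=pdf_true / pdf_true.sum())
sigma = 0.2
x_obs = x_true + rng.normal(0.0, sigma, x_true.size)

def neg_loglik(pars):
    logm_star, alpha = pars
    shape = schechter_logm(grid, logm_star, alpha)
    shape /= np.trapz(shape, grid)                      # normalize to a pdf
    # p(x_obs_i) = integral over x of pdf(x) * Normal(x_obs_i | x, sigma)
    kernel = norm.pdf(x_obs[:, None], loc=grid[None, :], scale=sigma)
    like = np.trapz(kernel * shape[None, :], grid, axis=1)
    return -np.sum(np.log(like + 1e-300))

fit = minimize(neg_loglik, x0=(10.0, -1.0), method="Nelder-Mead")
print("true   (log10 M*, alpha):", true_pars)
print("fitted (log10 M*, alpha):", np.round(fit.x, 2))
```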

  7. Decision-Tree Program

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1994-01-01

    IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.

  8. Smoking, vaping, eating: Is legalization impacting the way people use cannabis?

    PubMed

    Borodovsky, Jacob T; Crosier, Benjamin S; Lee, Dustin C; Sargent, James D; Budney, Alan J

    2016-10-01

    In the context of the shifting legal landscape of medical cannabis, different methods of cannabis administration have important public health implications. How medical marijuana laws (MML) may influence patterns of use of alternative methods of cannabis administration (vaping and edibles) compared to traditional methods (smoking) is unclear. The purpose of this study was to determine if the prevalence of use of alternative methods of cannabis administration varied in relation to the presence of and variation in MMLs among states in the United States. Using Qualtrics and Facebook, we collected survey data from a convenience sample of n=2838 individuals who had used cannabis at least once in their lifetime. Using multiple sources, U.S. states were coded by MML status, duration of MML status, and cannabis dispensary density. Adjusted logistic and linear regression analyses were used to analyze outcomes of ever use, preference for, and age of initiation of smoking, vaping, and edibles in relation to MML status, duration of MML status, and cannabis dispensary density. Individuals in MML states had a significantly higher likelihood of ever use of vaping (OR: 2.04, 99% CI: 1.62-2.58) and edibles (OR: 1.78, 99% CI: 1.39-2.26) than those in states without MMLs. Longer duration of MML status and higher dispensary density were also significantly associated with ever use of vaping and edibles. MMLs are related to state-level patterns of utilization of alternative methods of cannabis administration. Whether discrepancies in MML legislation are causally related to these findings will require further study. If MMLs do impact methods of use, regulatory bodies considering medical or recreational legalization should be aware of the potential impact this may have on cannabis users. Copyright © 2016 Elsevier B.V. All rights reserved.
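
    For readers unfamiliar with the statistics: the odds ratios and 99% confidence intervals quoted above are the exponentiated coefficients and interval endpoints of a logistic regression. The sketch below shows that generic computation on simulated data with hypothetical variable names; it is not the authors' survey data, model, or covariate set.

```python
# Generic illustration of how an odds ratio with a 99% confidence interval is
# obtained from a logistic regression, as in the analysis described above.
# The simulated data and variable names are purely hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 3000
mml_state = rng.integers(0, 2, n)            # 1 = lives in a state with an MML
age = rng.normal(30, 8, n)
# Simulate "ever vaped" with a true log-odds effect of ~0.7 for MML states.
logit = -1.0 + 0.7 * mml_state + 0.01 * (age - 30)
ever_vaped = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([mml_state, age]))
fit = sm.Logit(ever_vaped, X).fit(disp=False)

beta_mml = fit.params[1]
ci_low, ci_high = fit.conf_int(alpha=0.01)[1]   # 99% CI on the log-odds scale
print(f"OR = {np.exp(beta_mml):.2f}, "
      f"99% CI: {np.exp(ci_low):.2f}-{np.exp(ci_high):.2f}")
```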

  9. Smoking, Vaping, Eating: Is Legalization Impacting the Way People Use Cannabis?

    PubMed Central

    Borodovsky, Jacob T.; Crosier, Benjamin S.; Lee, Dustin C.; Sargent, James D.; Budney, Alan J.

    2016-01-01

    Background In the context of the shifting legal landscape of medical marijuana, different methods of cannabis administration have important public health implications. How medical marijuana laws (MML) may influence patterns of use of alternative methods of cannabis administration (vaping and edibles) compared to traditional methods (smoking) is unclear. The purpose of this study was to determine if the prevalence of use of alternative methods of cannabis administration varied in relation to the presence of and variation in MMLs among states in the United States. Method Using Qualtrics and Facebook, we collected survey data from a convenience sample of n=2838 individuals who had used cannabis at least once in their lifetime. Using multiple sources, U.S. states were coded by MML status, duration of MML status, and cannabis dispensary density. Adjusted logistic and linear regression analyses were used to analyze outcomes of ever use, preference for, and age of initiation of smoking, vaping, and edibles in relation to MML status, duration of MML status, and cannabis dispensary density. Results Individuals in MML states had a significantly higher likelihood of ever use of vaping (OR: 2.04, 99% CI: 1.62-2.58) and edibles (OR: 1.78, 99% CI: 1.39-2.26) than those in states without MMLs. Longer duration of MML status and higher dispensary density were also significantly associated with ever use of vaping and edibles. Conclusions MMLs are related to state-level patterns of utilization of alternative methods of cannabis administration. Whether discrepancies in MML legislation are causally related to these findings will require further study. If MMLs do impact methods of use, regulatory bodies considering medical or recreational legalization should be aware of the potential impact this may have on cannabis users. PMID:26992484

  10. Search for WIMP inelastic scattering off xenon nuclei with XENON100

    NASA Astrophysics Data System (ADS)

    Aprile, E.; Aalbers, J.; Agostini, F.; Alfonsi, M.; Amaro, F. D.; Anthony, M.; Arneodo, F.; Barrow, P.; Baudis, L.; Bauermeister, B.; Benabderrahmane, M. L.; Berger, T.; Breur, P. A.; Brown, A.; Brown, E.; Bruenner, S.; Bruno, G.; Budnik, R.; Bütikofer, L.; Calvén, J.; Cardoso, J. M. R.; Cervantes, M.; Cichon, D.; Coderre, D.; Colijn, A. P.; Conrad, J.; Cussonneau, J. P.; Decowski, M. P.; de Perio, P.; di Gangi, P.; di Giovanni, A.; Diglio, S.; Eurin, G.; Fei, J.; Ferella, A. D.; Fieguth, A.; Fulgione, W.; Gallo Rosso, A.; Galloway, M.; Gao, F.; Garbini, M.; Geis, C.; Goetzke, L. W.; Greene, Z.; Grignon, C.; Hasterok, C.; Hogenbirk, E.; Itay, R.; Kaminsky, B.; Kazama, S.; Kessler, G.; Kish, A.; Landsman, H.; Lang, R. F.; Lellouch, D.; Levinson, L.; Lin, Q.; Lindemann, S.; Lindner, M.; Lombardi, F.; Lopes, J. A. M.; Manfredini, A.; Maris, I.; Marrodán Undagoitia, T.; Masbou, J.; Massoli, F. V.; Masson, D.; Mayani, D.; Messina, M.; Micheneau, K.; Molinario, A.; Mora, K.; Murra, M.; Naganoma, J.; Ni, K.; Oberlack, U.; Pakarha, P.; Pelssers, B.; Persiani, R.; Piastra, F.; Pienaar, J.; Pizzella, V.; Piro, M.-C.; Plante, G.; Priel, N.; Rauch, L.; Reichard, S.; Reuter, C.; Rizzo, A.; Rosendahl, S.; Rupp, N.; Dos Santos, J. M. F.; Sartorelli, G.; Scheibelhut, M.; Schindler, S.; Schreiner, J.; Schumann, M.; Scotto Lavina, L.; Selvi, M.; Shagin, P.; Silva, M.; Simgen, H.; Sivers, M. V.; Stein, A.; Thers, D.; Tiseni, A.; Trinchero, G.; Tunnell, C.; Vargas, M.; Wang, H.; Wang, Z.; Wei, Y.; Weinheimer, C.; Wulf, J.; Ye, J.; Zhang, Y.; Xenon Collaboration

    2017-07-01

    We present the first constraints on the spin-dependent, inelastic scattering cross section of weakly interacting massive particles (WIMPs) on nucleons from XENON100 data with an exposure of 7.64 × 10³ kg·days. XENON100 is a dual-phase xenon time projection chamber with 62 kg of active mass, operated at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy and designed to search for nuclear recoils from WIMP-nucleus interactions. Here we explore inelastic scattering, where a transition to a low-lying excited nuclear state of 129Xe is induced. The experimental signature is a nuclear recoil observed together with the prompt deexcitation photon. We see no evidence for such inelastic WIMP-129Xe interactions. A profile likelihood analysis allows us to set a 90% C.L. upper limit on the inelastic, spin-dependent WIMP-nucleon cross section of 3.3 × 10⁻³⁸ cm² at 100 GeV/c². This is the most constraining result to date, and sets the pathway for an analysis of this interaction channel in upcoming, larger dual-phase xenon detectors.
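
    As a rough illustration of the statistical machinery mentioned above, the sketch below computes a profile-likelihood upper limit for a toy counting experiment, profiling out a Gaussian-constrained background and assuming the asymptotic one-sided 90% C.L. threshold q ≈ 1.64. The counts, background, and ranges are invented; this is not the XENON100 analysis.

```python
# Toy profile-likelihood upper limit for a counting experiment, illustrating
# the kind of statistical treatment mentioned above. This is NOT the XENON100
# analysis; the counts, background estimate, and ranges are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson, norm

n_obs = 3                 # observed events (assumed)
b_hat, b_err = 2.5, 0.5   # background expectation and its uncertainty (assumed)

def nll(s, b):
    """Negative log-likelihood: Poisson counts x Gaussian background constraint."""
    return -poisson.logpmf(n_obs, s + b) - norm.logpdf(b, b_hat, b_err)

def profiled_nll(s):
    """Minimize over the nuisance parameter b for fixed signal expectation s."""
    res = minimize_scalar(lambda b: nll(s, b), bounds=(1e-6, 10 * b_hat),
                          method="bounded")
    return res.fun

# Profile-likelihood-ratio test statistic q(s), with the best fit restricted to s >= 0.
best = minimize_scalar(profiled_nll, bounds=(0.0, 30.0), method="bounded")
s_hat, nll_min = best.x, best.fun

def q(s):
    return 2.0 * (profiled_nll(s) - nll_min)

# Asymptotic one-sided 90% C.L. upper limit: scan s until q(s) crosses ~1.64.
scan = np.linspace(s_hat, 30.0, 600)
upper = next(s for s in scan if q(s) > norm.ppf(0.90) ** 2)
print(f"90% C.L. upper limit on the signal expectation: {upper:.2f} events")
```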

  11. Entanglement spectra of superconductivity ground states on the honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Predin, Sonja; Schliemann, John

    2017-12-01

    We analytically evaluate the entanglement spectra of the superconductivity states in graphene, primarily focusing on the s-wave and chiral dx2-y2 + idxy superconductivity states. We demonstrate that the topology of the entanglement Hamiltonian can differ from that of the subsystem Hamiltonian. In particular, the topological properties of the entanglement Hamiltonian of the chiral dx2-y2 + idxy superconductivity state obtained by tracing out one spin direction clearly differ from those of the time-reversal invariant Hamiltonian of noninteracting fermions on the honeycomb lattice.

  12. Atom-atom interactions around the band edge of a photonic crystal waveguide

    NASA Astrophysics Data System (ADS)

    Hood, Jonathan D.; Goban, Akihisa; Asenjo-Garcia, Ana; Lu, Mingwu; Yu, Su-Peng; Chang, Darrick E.; Kimble, H. J.

    2016-09-01

    Tailoring the interactions between quantum emitters and single photons constitutes one of the cornerstones of quantum optics. Coupling a quantum emitter to the band edge of a photonic crystal waveguide (PCW) provides a unique platform for tuning these interactions. In particular, the cross-over from propagating fields E(x) ∝ e±ikxx outside the bandgap to localized fields E(x) ∝ e−κx|x| within the bandgap should be accompanied by a transition from largely dissipative atom-atom interactions to a regime where dispersive atom-atom interactions are dominant. Here, we experimentally observe this transition by shifting the band edge frequency of the PCW relative to the D1 line of atomic cesium for N̄ = 3.0 ± 0.5 atoms trapped along the PCW. Our results are the initial demonstration of this paradigm for coherent atom-atom interactions with low dissipation into the guided mode.

  13. Growth Texture and Mechanism of Zinc Nanowires Produced by Mechanical Elongation of Nanocontacts.

    PubMed

    Yamabe, Kammu; Kizuka, Tokushi

    2018-01-01

    Two zinc nanotips were brought into contact and elongated inside a transmission electron microscope, thereby growing single-crystal nanowires. The growth dynamics was observed in situ via a lattice imaging method. The preferential crystal growth directions were identified as [101-0], [112-0], [101-2-], and [0001]. Of these, the nanowires grown along the [101-0] and [112-0] directions accounted for 75% of the total and were surrounded by low-energy side surfaces, i.e., {0001}, {101-1}, and {101-0}. On the basis of these features, models of the nanowire morphology were proposed. In either growth direction, the tensile force aligned parallel to the direction along which slip events corresponding to the predominant slip system were unlikely to occur. This led to a high tensile stress for extracting atoms from the growth region, i.e., the promotion of nanowire growth.

  14. Higher-than-ballistic conduction of viscous electron flows

    NASA Astrophysics Data System (ADS)

    Guo, Haoyu; Ilseven, Ekin; Falkovich, Gregory; Levitov, Leonid S.

    2017-03-01

    Strongly interacting electrons can move in a neatly coordinated way, reminiscent of the movement of viscous fluids. Here, we show that in viscous flows, interactions facilitate transport, allowing conductance to exceed the fundamental Landauer’s ballistic limit Gball. The effect is particularly striking for the flow through a viscous point contact, a constriction exhibiting quantum mechanical ballistic transport at T = 0 but governed by electron hydrodynamics at elevated temperatures. We develop a theory of the ballistic-to-viscous crossover using an approach based on quasi-hydrodynamic variables. Conductance is found to obey an additive relation G = Gball + Gvis, where the viscous contribution Gvis dominates over Gball in the hydrodynamic limit. The superballistic, low-dissipation transport is a generic feature of viscous electronics.

  15. Anomalous thermal diffusivity in underdoped YBa2Cu3O6+x

    NASA Astrophysics Data System (ADS)

    Zhang, Jiecheng; Levenson-Falk, Eli M.; Ramshaw, B. J.; Bonn, D. A.; Liang, Ruixing; Hardy, W. N.; Hartnoll, Sean A.; Kapitulnik, Aharon

    2017-05-01

    The thermal diffusivity in the ab plane of underdoped YBCO crystals is measured by means of a local optical technique in the temperature range of 25-300 K. The phase delay between a point heat source and a set of detection points around it allows for high-resolution measurement of the thermal diffusivity and its in-plane anisotropy. Although the magnitude of the diffusivity may suggest that it originates from phonons, its anisotropy is comparable with reported values of the electrical resistivity anisotropy. Furthermore, the anisotropy drops sharply below the charge order transition, again similar to the electrical resistivity anisotropy. Both of these observations suggest that the thermal diffusivity has pronounced electronic as well as phononic character. At the same time, the small electrical and thermal conductivities at high temperatures imply that neither well-defined electron nor phonon quasiparticles are present in this material. We interpret our results through a strongly interacting incoherent electron-phonon “soup” picture characterized by a diffusion constant D ~ vB²τ, where vB is the soup velocity, and scattering of both electrons and phonons saturates a quantum thermal relaxation time τ ~ ℏ/kBT.

  16. Transport in thin polarized Fermi-liquid films

    NASA Astrophysics Data System (ADS)

    Li, David Z.; Anderson, R. H.; Miller, M. D.

    2015-10-01

    We calculate expressions for the state-dependent quasiparticle lifetime τσ, the thermal conductivity κ, and the shear viscosity η, and discuss the spin diffusion coefficient D for Fermi-liquid films in two dimensions. The expressions are valid for low temperatures and arbitrary polarization. In two dimensions, as in three dimensions, the integrals over the transition rates factor into energy and angular parts. However, the angular integrations contain a weak divergence. This problem is addressed using the method of K. Miyake and W. J. Mullin [Phys. Rev. Lett. 50, 197 (1983), 10.1103/PhysRevLett.50.197; J. Low Temp. Phys. 56, 499 (1984), 10.1007/BF00681808]. The low-temperature expressions for the transport coefficients are essentially exact. We find that κ⁻¹ ~ T ln T and η⁻¹ ~ T² for arbitrary polarizations 0 ≤ P ≤ 1. These results are in agreement with earlier zero-polarization results of H. H. Fu and C. Ebner [Phys. Rev. A 10, 338 (1974), 10.1103/PhysRevA.10.338], but differ from the temperature dependence of the shear viscosity found by D. S. Novikov (arXiv:cond-mat/0603184). They also differ from the discontinuous change of the temperature dependence of D from zero to nonzero polarization that was discovered by Miyake and Mullin. We note that in two dimensions the shear viscosity requires a unique analysis. We obtain predictions for the density, temperature, and polarization dependence of κ, η, and D for second-layer 3He films on graphite and for thin 3He-4He superfluid mixtures. For 3He on graphite, we find roughly an order-of-magnitude increase in κ and η as the polarization is increased from 0 to 1. For D, a similarly large increase is predicted from zero polarization to the polarization where D is a maximum (~0.74). We discuss the applicability of 3He thin films to the question of the existence of a universal lower bound for the ratio of the shear viscosity to the entropy density.

  17. What are we learning from the relative orientation between density structures and the magnetic field in molecular clouds?

    NASA Astrophysics Data System (ADS)

    Soler, J. D.; Hennebelle, P.

    2017-10-01

    The transition in the relative orientation between the density structures and the magnetic field B̂, from being mostly parallel at low NH to mostly perpendicular at the highest NH, is related to the magnetic field strength and constitutes a crucial piece of information for determining the role of the magnetic field in the dynamics of MCs.

  18. Revised thermonuclear rate of 7Be(n,α)4He relevant to Big-Bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Hou, S. Q.; He, J. J.; Kubono, S.; Chen, Y. S.

    2015-05-01

    In the standard Big-Bang nucleosynthesis (BBN) model, the primordial 7Li abundance is overestimated by about a factor of 2 to 3 compared to astronomical observations; this is the so-called cosmological lithium problem, which remains unresolved. The 7Be(n,α)4He reaction was regarded as the second most important reaction affecting the 7Li abundance, by destroying 7Be during BBN. However, the rate of 7Be(n,α)4He has not been well studied so far. This reaction rate was first estimated by Wagoner in 1969, and that estimate has been the one primarily adopted in current BBN simulations. The simple estimate involved only a direct reaction contribution, but a resonant component should also be considered according to later experimental results. In the present work, we revised this rate based on the indirect cross-section data available for the 4He(α,n)7Be and 4He(α,p)7Li reactions by applying charge symmetry and the principle of detailed balance. Our new result shows that the previous rate (acting as an upper limit) is overestimated by about a factor of ten. A BBN simulation shows that the present rate leads to a 1.2% increase in the final 7Li abundance compared with the result using the Wagoner rate; hence, the present rate even worsens the 7Li problem. By the present estimation, the role of 7Be(n,α)4He in destroying 7Be is weakened from second to third in importance and, in turn, the 7Be(d,p)2 4He reaction becomes of secondary importance in destroying 7Be.
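
    For reference, the conversion from measured cross sections of the forward reaction to the reverse one rests on the textbook detailed-balance relation sketched below (quoted from standard reaction theory as an aid to the reader, not from the paper itself); the J's are the spins of the participating nuclei and the k's are the channel wave numbers at the same total energy.

```latex
% Detailed-balance relation between a binary reaction a + A -> b + B and its
% inverse, of the kind used to convert 4He(alpha,n)7Be data into the
% 7Be(n,alpha)4He rate.
\[
  \frac{\sigma_{bB \to aA}(E')}{\sigma_{aA \to bB}(E)}
  = \frac{(2J_a+1)(2J_A+1)}{(2J_b+1)(2J_B+1)}\,
    \frac{k_{aA}^{2}}{k_{bB}^{2}}
\]
```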

  19. Formation and decay analysis of 98,104Cd* isotopes in 40Ca-induced reactions

    NASA Astrophysics Data System (ADS)

    Gautam, Manjeet Singh; Kaur, Amandeep; Sharma, Manoj K.

    2015-11-01

    We have analyzed the fusion dynamics of the 40Ca + 58,64Ni reactions by using the energy-dependent Woods-Saxon potential (EDWSP) model and the coupled channel model, and subsequently the decay patterns of the 98,104Cd* nuclei are analyzed via the dynamical cluster-decay model (DCM). The influence of intrinsic degrees of freedom of the colliding pairs, such as low-lying surface vibrations and neutron transfer channels, is taken into account within the coupled channel calculations. Interestingly, the energy dependence in the Woods-Saxon potential induces barrier modification effects (barrier height, barrier position, barrier curvature) in a way somewhat similar to the coupled channel approach, and hence adequately explains the observed fusion dynamics of the 40Ca + 58,64Ni reactions. In addition, the decay of the compound nuclei formed in the fusion of the 40Ca + 58,64Ni reactions is investigated by using the DCM. The calculations are done for quadrupole (β2) deformed fragments having optimum orientations for hot configurations. The experimental data for evaporation residues over the wide center-of-mass energy (Ec.m.) range of 64-88 MeV are nicely addressed using the collective clusterization approach of the DCM. The comparative analysis of the decay profiles of 98,104Cd* is worked out by introducing angular momentum and temperature effects in the fragmentation potential and the preformation factor.

  20. Bayesian Estimation of Multidimensional Item Response Models. A Comparison of Analytic and Simulation Algorithms

    ERIC Educational Resources Information Center

    Martin-Fernandez, Manuel; Revuelta, Javier

    2017-01-01

    This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…

  1. Resistance Distances and Kirchhoff Index in Generalised Join Graphs

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan

    2017-03-01

    The resistance distance between any two vertices of a connected graph is defined as the effective resistance between them in the electrical network constructed from the graph by replacing each edge with a unit resistor. The Kirchhoff index of a graph is defined as the sum of the resistance distances over all pairs of vertices of the graph. Let G = H[G1, G2, …, Gk] be the generalised join graph of G1, G2, …, Gk determined by H. In this paper, we first give formulae for the resistance distances and Kirchhoff index of G in terms of parameters of the Gi's and H. Then, we show that computing resistance distances and the Kirchhoff index of G can be decomposed into simpler subproblems. Finally, we obtain explicit formulae for the resistance distances and Kirchhoff index of G when the Gi's and H are special graphs such as the complete graph, the path, and the cycle.
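
    Both quantities can be computed for any connected graph from the Moore-Penrose pseudoinverse of the Laplacian, which may help readers check the formulae; the example graph (the 4-cycle C4) is an arbitrary choice, not taken from the paper. Equivalently, the Kirchhoff index equals n times the sum of the reciprocals of the nonzero Laplacian eigenvalues.

```python
# Resistance distances and Kirchhoff index from the Laplacian pseudoinverse.
# Generic computation for any connected graph; the example graph below is an
# arbitrary illustration, not taken from the paper.
import numpy as np

def resistance_matrix(adjacency):
    """R[i, j] = L+[i, i] + L+[j, j] - 2 L+[i, j], with L+ the pseudoinverse."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
    Lp = np.linalg.pinv(L)                       # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

def kirchhoff_index(adjacency):
    """Sum of resistance distances over all unordered vertex pairs."""
    R = resistance_matrix(adjacency)
    return R.sum() / 2.0

# Example: the 4-cycle C4; each edge is a unit resistor.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(resistance_matrix(C4).round(3))            # adjacent pairs: 3/4, opposite: 1
print("Kirchhoff index of C4:", kirchhoff_index(C4))   # = 5
```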

  2. 3He-129Xe Comagnetometry using 87Rb Detection and Decoupling

    NASA Astrophysics Data System (ADS)

    Limes, M. E.; Sheng, D.; Romalis, M. V.

    2018-01-01

    We describe a 3He-129Xe comagnetometer using 87Rb atoms for noble-gas spin polarization and detection. We use a train of 87Rb π pulses and σ+/σ− optical pumping to realize a finite-field Rb magnetometer with suppression of spin-exchange relaxation. We suppress frequency shifts from polarized Rb by measuring the 3He and 129Xe spin precession frequencies in the dark, while applying π pulses along two directions to depolarize the Rb atoms. The plane of the π pulses is rotated to suppress the Bloch-Siegert shifts for the nuclear spins. We measure the ratio of the 3He and 129Xe spin precession frequencies with sufficient absolute accuracy to resolve Earth's rotation without changing the orientation of the comagnetometer. A frequency resolution of 7 nHz is achieved after integration for 8 h without evidence of significant drift.

  3. Reduction of non-Betalactam Antibiotics COD by Combined Coagulation and Advanced Oxidation Processes.

    PubMed

    Yazdanbakhsh, Ahmad Reza; Mohammadi, Amir Sheikh; Alinejad, Abdol Azim; Hassani, Ghasem; Golmohammadi, Sohrab; Mohseni, Seyed Mohsen; Sardar, Mahdieh; Sarsangi, Vali

    2016-11-01

    The present study evaluates the reduction of antibiotic COD from wastewater by combined coagulation and advanced oxidation processes (AOPs). The reduction of Azithromycin COD by combined coagulation and Fenton-like processes reached a maximum of 96.9% at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.36 mM/L and 0.38 mM/L, respectively. Also, 97.9% Clarithromycin COD reduction was achieved at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.3 mM/L and 0.3 mM/L, respectively. The results of the kinetic studies were best fitted by the pseudo-first-order equation. The results showed higher rate constant values for the combined coagulation and Fenton-like processes (kap = 0.022 min⁻¹ and a half-life of 31.5 min for Azithromycin; kap = 0.023 min⁻¹ and a half-life of 30.1 min for Clarithromycin).
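
    As a quick arithmetic check, the quoted half-lives follow from the pseudo-first-order rate constants via t1/2 = ln 2 / kap:

```python
# Quick check of the reported pseudo-first-order half-lives: t_1/2 = ln(2)/k_ap.
import math

for drug, k_ap in [("Azithromycin", 0.022), ("Clarithromycin", 0.023)]:  # min^-1
    print(f"{drug}: t_1/2 = {math.log(2) / k_ap:.1f} min")   # 31.5 and 30.1 min
```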

  4. Neutrino-Induced Nucleosynthesis in Helium Shells of Early Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Banerjee, Projjwal; Qian, Yong-Zhong; Heger, Alexander; Haxton, Wick

    2016-02-01

    We summarize our studies on neutrino-driven nucleosynthesis in He shells of early core-collapse supernovae with metallicities of Z ≲ 10⁻³ Z⊙. We find that for progenitors of ~11-15 M⊙, the neutrons released by 4He(ν̄e, e+n)3H in He shells can be captured to produce nuclei with mass numbers up to A ~ 200. This mechanism is sensitive to neutrino emission spectra and flavor oscillations. In addition, we find two new primary mechanisms for neutrino-induced production of 9Be in He shells. The first mechanism produces 9Be via 7Li(n,γ)8Li(n,γ)9Li(e−ν̄e)9Be and relies on a low explosion energy for its survival. The second mechanism operates in progenitors of ~8 M⊙, where 9Be can be produced directly via 7Li(3H, n0)9Be during the rapid expansion of the shocked He-shell material. The light nuclei 7Li and 3H involved in these mechanisms are produced by neutrino interactions with 4He. We discuss the implications of neutrino-induced nucleosynthesis in He shells for interpreting the elemental abundances in metal-poor stars.

  5. Generalized Poisson-Kac Processes: Basic Properties and Implications in Extended Thermodynamics and Transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2016-04-01

    We introduce a new class of stochastic processes in ℝⁿ, referred to as generalized Poisson-Kac (GPK) processes, that generalizes the Poisson-Kac telegrapher's random motion in higher dimensions. These stochastic processes possess finite propagation velocity, almost everywhere smooth trajectories, and converge in the Kac limit to Brownian motion. GPK processes are defined by coupling the selection of a bounded velocity vector from a family of N distinct ones with a Markovian dynamics controlling probabilistically this selection. This model can be used as a probabilistic tool for a stochastically consistent formulation of extended thermodynamic theories far from equilibrium.

  6. Electron emission and recoil effects following the beta decay of He6

    NASA Astrophysics Data System (ADS)

    Schulhoff, Eva E.; Drake, G. W. F.

    2015-11-01

    Probabilities for atomic electron excitation (shake-up) and ionization (shake-off) are studied following the beta-decay process 6He → 6Li+ + e− + ν̄e, and in particular, recoil-induced contributions to the shake-off probability are calculated within the nonrelativistic sudden approximation. A pseudostate expansion method together with Stieltjes imaging is used to represent the complete two-electron spectrum of final 6Li+, 6Li2+, and 6Li3+ states. Results for the recoil correction show a 7σ disagreement with the experiment of Carlson et al. [Phys. Rev. 129, 2220 (1963), 10.1103/PhysRev.129.2220]. A variety of sum rules, including a newly derived Thomas-Reiche-Kuhn oscillator strength sum rule for dipole recoil terms, provides tight constraints on the accuracy of the results. Calculations are performed for the helium 1s2s 3S metastable state, as well as for the 1s2 1S ground state. Our results would reduce the recoil-induced correction to the measured electron-neutrino coupling constant aeν from the apparent 0.6% used in the experiments to 0.09%.

  7. Estimation of heat loss from a cylindrical cavity receiver based on simultaneous energy and exergy analyses

    NASA Astrophysics Data System (ADS)

    Madadi, Vahid; Tavakoli, Touraj; Rahimi, Amir

    2015-03-01

    This study undertakes an experimental and theoretical investigation of heat losses from a cylindrical cavity receiver employed in a solar parabolic dish collector. Simultaneous energy and exergy equations are used for a thermal performance analysis of the system. The effects of wind speed and its direction on the convection loss have also been investigated, as have the effects of operational parameters, such as heat transfer fluid mass flow rate and wind speed, and structural parameters, such as receiver geometry and inclination. The portion of radiative heat loss is less than 10%. An empirical, simplified correlation for estimating the dimensionless convective heat transfer coefficient in terms of the Re number and the average receiver wall temperature is proposed; it is applicable for wind speeds of 0.1 to 10 m/s. Moreover, the proposed Nu-number correlation is validated using experimental data obtained with a conical receiver with two aperture diameters. The coefficient of determination (R²) and the normalized root mean square error (NRMSE) were calculated, and the results show good agreement between predicted results and experimental data: R² is greater than 0.95 and the NRMSE is less than 0.06 in this analysis.

  8. Improved Limits on Axionlike-Particle-Mediated P,T-Violating Interactions between Electrons and Nucleons from Electric Dipole Moments of Atoms and Molecules

    NASA Astrophysics Data System (ADS)

    Stadnik, Y. V.; Dzuba, V. A.; Flambaum, V. V.

    2018-01-01

    In the presence of P,T-violating interactions, the exchange of axionlike particles between electrons and nucleons in atoms and molecules induces electric dipole moments (EDMs) of atoms and molecules. We perform calculations of such axion-exchange-induced atomic EDMs using the relativistic Hartree-Fock-Dirac method including electron core polarization corrections. We present analytical estimates to explain the dependence of these induced atomic EDMs on the axion mass and atomic parameters. From the experimental bounds on the EDMs of atoms and molecules, including 133Cs, 205Tl, 129Xe, 199Hg, 171Yb19F, 180Hf19F+, and 232Th16O, we constrain the P,T-violating scalar-pseudoscalar nucleon-electron and electron-electron interactions mediated by a generic axionlike particle of arbitrary mass. Our limits improve on existing laboratory bounds from other experiments by many orders of magnitude for ma ≳ 10⁻² eV. We also place constraints on CP violation in certain types of relaxion models.

  9. High-Precision Mass Measurement of 56Cu and the Redirection of the rp-Process Flow

    NASA Astrophysics Data System (ADS)

    Valverde, A. A.; Brodeur, M.; Bollen, G.; Eibach, M.; Gulyuz, K.; Hamaker, A.; Izzo, C.; Ong, W.-J.; Puentes, D.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Surbrook, J.; Villari, A. C. C.; Yandow, I. T.

    2018-01-01

    We report the mass measurement of 56Cu, using the LEBIT 9.4 T Penning trap mass spectrometer at the National Superconducting Cyclotron Laboratory at Michigan State University. The mass of 56Cu is critical for constraining the reaction rates of the 55Ni(p,γ)56Cu(p,γ)57Zn(β+)57Cu bypass around the 56Ni waiting point. Previous recommended mass excess values have disagreed by several hundred keV. Our new value, ME = -38 626.7(7.1) keV, is a factor of 30 more precise than the extrapolated value suggested in the 2012 atomic mass evaluation [Chin. Phys. C 36, 1603 (2012), 10.1088/1674-1137/36/12/003], and more than a factor of 12 more precise than values calculated using local mass extrapolations, while agreeing with the newest 2016 atomic mass evaluation value [Chin. Phys. C 41, 030003 (2017), 10.1088/1674-1137/41/3/030003]. The new experimental average, using our new mass and the value from AME2016, is used to calculate the astrophysical 55Ni(p,γ) and 56Cu(p,γ) forward and reverse rates and perform reaction network calculations of the rp process. These show that the rp-process flow redirects around the 56Ni waiting point through the 55Ni(p,γ) route, allowing it to proceed to higher masses more quickly and resulting in a reduction in ashes around this waiting point and an enhancement of higher-mass ashes.

  10. Influence of basis-set size on the X2Σ1/2+, A2Π1/2, A2Π3/2, and B2Σ1/2+ potential-energy curves, A2Π3/2 vibrational energies, and D1 and D2 line shapes of Rb+He

    NASA Astrophysics Data System (ADS)

    Blank, L. Aaron; Sharma, Amit R.; Weeks, David E.

    2018-03-01

    The X2Σ1/2+, A2Π1/2, A2Π3/2, and B2Σ1/2+ potential-energy curves for Rb+He are computed at the spin-orbit multireference configuration interaction level of theory using a hierarchy of Gaussian basis sets at the double-zeta (DZ), triple-zeta (TZ), and quadruple-zeta (QZ) levels of valence quality. Counterpoise and Davidson-Silver corrections are employed to remove basis-set superposition error and ameliorate size-consistency error. An extrapolation is performed to obtain a final set of potential-energy curves in the complete basis-set (CBS) limit. This yields four sets of systematically improved X2Σ1/2+, A2Π1/2, A2Π3/2, and B2Σ1/2+ potential-energy curves that are used to compute the A2Π3/2 bound vibrational energies, the position of the D2 blue satellite peak, and the D1 and D2 pressure broadening and shifting coefficients, at the DZ, TZ, QZ, and CBS levels. Results are compared with previous calculations and experimental observation.

  11. Nuclear data correlation between different isotopes via integral information

    NASA Astrophysics Data System (ADS)

    Rochman, Dimitri A.; Bauge, Eric; Vasiliev, Alexander; Ferroukhi, Hakim; Perret, Gregory

    2018-05-01

    This paper presents a Bayesian approach based on integral experiments to create correlations between different isotopes which do not appear with differential data. A simple Bayesian set of equations is presented with random nuclear data, similarly to the usual methods applied with differential data. As a consequence, updated nuclear data (cross sections, ν, fission neutron spectra and covariance matrices) are obtained, leading to better integral results. An example for 235U and 238U is proposed taking into account the Bigten criticality benchmark.
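
    A generic sketch of this kind of Bayesian Monte Carlo update, in which random samples of nuclear-data parameters are weighted by their agreement with an integral benchmark: the weighted posterior acquires correlations between parameters (standing in here for data of two different isotopes) that were independent a priori. All numbers below are invented for illustration and are not taken from the paper.

```python
# Sketch of a Bayesian Monte Carlo update of nuclear data against an integral
# benchmark, illustrating how such an update correlates parameters (standing
# in for cross sections of two different isotopes) that were independent
# beforehand. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 20000

# Prior: two independent, normalized "cross-section scaling factors".
x = rng.normal(1.0, 0.05, size=(n_samples, 2))    # columns: isotope A, isotope B

# Toy integral response, e.g. a k_eff-like quantity depending on both isotopes.
def integral_response(x):
    return 0.6 * x[:, 0] + 0.4 * x[:, 1]

benchmark_value, benchmark_unc = 1.000, 0.005     # assumed integral experiment

# Bayesian weights: likelihood of each random sample given the benchmark.
chi2 = ((integral_response(x) - benchmark_value) / benchmark_unc) ** 2
w = np.exp(-0.5 * chi2)
w /= w.sum()

# Posterior mean and covariance (weighted moments of the prior samples).
mean_post = w @ x
cov_post = (x - mean_post).T @ ((x - mean_post) * w[:, None])
corr_post = cov_post[0, 1] / np.sqrt(cov_post[0, 0] * cov_post[1, 1])

print("prior correlation:     0.0 (by construction)")
print(f"posterior correlation: {corr_post:.2f}  "
      "(anti-correlation induced by the integral constraint)")
```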

  12. Structures of p-shell double-Λ hypernuclei studied with microscopic cluster models

    NASA Astrophysics Data System (ADS)

    Kanada-En'yo, Yoshiko

    2018-03-01

    0s-orbit Λ states in the p-shell double-Λ hypernuclei 8ΛΛLi, 9ΛΛLi, 10,11,12ΛΛBe, 12,13ΛΛB, and 14ΛΛC are investigated. Microscopic cluster models are applied to the core nuclear part and a potential model is adopted for the Λ particles. The Λ-core potential is a folding potential obtained with effective G-matrix Λ-N interactions, which reasonably reproduce the energy spectra of the corresponding single-Λ hypernuclei A-1ΛZ. The system dependence of the Λ-Λ binding energies is understood in terms of the core polarization energy from nuclear size reduction. Reductions of nuclear sizes and E2 transition strengths by the Λ particles are also discussed.

  13. 2D Effective Electron Mass at the Fermi Level in Accumulation and Inversion Layers of MOSFET Nano Devices.

    PubMed

    Singh, S L; Singh, S B; Ghatak, K P

    2018-04-01

    In this paper an attempt is made to study the 2D Fermi Level Mass (FLM) in accumulation and inversion layers of nano MOSFET devices made of nonlinear optical, III-V, ternary, quaternary, II-VI, IV-VI, Ge, and stressed materials by formulating 2D carrier dispersion laws on the basis of the k·p formalism and considering the energy band constants of each particular material. Taking accumulation and inversion layers of Cd3As2, CdGeAs2, InSb, Hg1-xCdxTe, In1-xGaxAsyP1-y lattice matched to InP, CdS, GaSb, and Ge as examples, it is observed that the FLM depends on the sub-band index for nano MOSFET devices made of Cd3As2 and CdGeAs2, which is a characteristic feature of such 2D systems. Besides, the FLM depends on the scattering potential in all cases, and the mass changes with increasing surface electric field. The FLM exists in the band gap, which is impossible without heavy doping.

  14. Prospects for improved understanding of isotopic reactor antineutrino fluxes

    NASA Astrophysics Data System (ADS)

    Gebre, Y.; Littlejohn, B. R.; Surukuchi, P. T.

    2018-01-01

    Predictions of antineutrino fluxes produced by fission isotopes in a nuclear reactor have recently received increased scrutiny due to observed differences in predicted and measured inverse beta decay (IBD) yields, referred to as the "reactor antineutrino flux anomaly." In this paper, global fits are applied to existing IBD yield measurements to produce constraints on antineutrino production by individual plutonium and uranium fission isotopes. We find that fits including measurements from highly 235U-enriched cores and fits including Daya Bay's new fuel evolution result produce discrepant best-fit IBD yields for 235U and 239Pu. This discrepancy can be alleviated in a global analysis of all data sets through simultaneous fitting of 239Pu, 235U, and 238U yields. The measured IBD yield of 238U in this analysis is (7.02 ± 1.65) × 10⁻⁴³ cm²/fission, nearly two standard deviations below existing predictions. Future hypothetical IBD yield measurements by short-baseline reactor experiments are examined to determine their possible impact on the global understanding of isotopic IBD yields. It is found that future improved short-baseline IBD yield measurements at both high-enriched and low-enriched cores can significantly improve constraints for 235U, 238U, and 239Pu, providing comparable or superior precision to existing conversion- and summation-based antineutrino flux predictions. Systematic and experimental requirements for these future measurements are also investigated.
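
    Structurally, such a global fit treats each measured IBD yield as a fission-fraction-weighted sum of per-isotope yields and solves for the latter, for example by weighted least squares. The sketch below shows only that structure on invented fission fractions and pseudo-data, fitting 235U and 239Pu while fixing the subdominant isotopes to assumed predictions; it is not the data set, isotope treatment, or fit of the paper.

```python
# Structure of a global fit of isotopic IBD yields: each measured yield is a
# fission-fraction-weighted sum of per-isotope yields. Here 235U and 239Pu are
# fitted while 238U and 241Pu are fixed to assumed predictions, roughly in the
# spirit of the fits described above. All numbers are invented pseudo-data.
import numpy as np

rng = np.random.default_rng(4)

# Effective fission fractions (235U, 238U, 239Pu, 241Pu) for five experiments.
F = np.array([[1.00, 0.00, 0.00, 0.00],    # highly enriched core
              [0.62, 0.08, 0.25, 0.05],
              [0.56, 0.08, 0.30, 0.06],
              [0.52, 0.08, 0.33, 0.07],
              [0.47, 0.08, 0.38, 0.07]])

sigma_true = np.array([6.1, 10.1, 4.4, 6.0])   # "true" yields, 1e-43 cm^2/fission
err = np.full(5, 0.10)                         # measurement uncertainties
y_meas = F @ sigma_true + rng.normal(0.0, err)

# Fix the subdominant isotopes to predicted yields and fit 235U and 239Pu.
sigma_238_pred, sigma_241_pred = 10.1, 6.0
y_adj = y_meas - F[:, 1] * sigma_238_pred - F[:, 3] * sigma_241_pred
A = F[:, [0, 2]]                               # design matrix for (235U, 239Pu)

# Weighted least squares: minimize sum_i ((y_i - A_i . sigma) / err_i)^2.
W = np.diag(1.0 / err ** 2)
cov = np.linalg.inv(A.T @ W @ A)
sigma_fit = cov @ (A.T @ W @ y_adj)

for name, s, ds in zip(["235U", "239Pu"], sigma_fit, np.sqrt(np.diag(cov))):
    print(f"sigma_f({name}) = {s:.2f} +/- {ds:.2f}  (x 1e-43 cm^2/fission)")
```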

  15. 63Ni(n,γ) cross sections measured with DANCE

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Bredeweg, T. A.; Couture, A.; Göbel, K.; Heftrich, T.; Jandel, M.; Käppeler, F.; Lederer, C.; Kivel, N.; Korschinek, G.; Krtička, M.; O'Donnell, J. M.; Ostermöller, J.; Plag, R.; Reifarth, R.; Schumann, D.; Ullmann, J. L.; Wallner, A.

    2015-10-01

    The neutron capture cross section of the s-process branch nucleus 63Ni affects the abundances of other nuclei in its region, especially 63Cu and 64Zn. In order to determine the energy-dependent neutron capture cross section in the astrophysical energy region, an experiment has been performed at the Los Alamos National Laboratory using the calorimetric 4π BaF2 array DANCE. The (n,γ) cross section of 63Ni has been determined relative to the well-known 197Au standard with uncertainties below 15%. Various 63Ni resonances have been identified based on the Q value. Furthermore, the s-process sensitivity of the new values was analyzed with the new network calculation tool NETZ.

  16. Spin Polarization of Rb and Cs np 2P3/2 (n = 5, 6) Atoms by Circularly Polarized Photoexcitation of a Transient Diatomic Molecule

    NASA Astrophysics Data System (ADS)

    Mironov, A. E.; Hewitt, J. D.; Eden, J. G.

    2017-03-01

    We report the selective population of Rb or Cs np 2P3/2 (n = 5, 6; F = 4, 5) hyperfine states by the photodissociation of a transient, alkali-rare gas diatomic molecule. Circularly polarized (σ−) amplified spontaneous emission (ASE) on the D2 line of Rb or Cs (780.0 and 852.1 nm, respectively) is generated when Rb-Xe or Cs-Xe ground state collision pairs are photoexcited by a σ+-polarized optical field having a wavelength within the D2 blue satellite continuum, associated with the B2Σ1/2+ ← X2Σ1/2+ (free←free) transition of the diatomic molecule. The degree of spin polarization of Cs (6p 2P3/2), specifically, is found to be dependent on the interatomic distance (R) at which the excited complex is born, a result attributed to the structure of the B2Σ1/2+ state. For Cs-Xe atomic pairs, tuning the wavelength of the optical field from 843 to 848 nm varies the degree of circular polarization of the ASE from 63% to almost unity because of the perturbation, in the 5 ≤ R ≤ 6 Å interval, of the 2Σ1/2+ potential by a dσ molecular orbital associated with a higher 2Λ electronic state. Monitoring only the Cs 6p 2P3/2 spin polarization reveals a previously unobserved interaction of CsXe (B2Σ1/2+) with the lowest vibrational levels of a 2Λ state derived from Cs (5d) + Xe. By inserting a molecular intermediate into the alkali atom excitation mechanism, these experiments realize electronic spin polarization through populating no more than two np 2P3/2 hyperfine states, and demonstrate a sensitive spectroscopic probe of R-dependent state-state interactions and their impact on interatomic potentials.

  17. Results from a calibration of XENON100 using a source of dissolved radon-220

    NASA Astrophysics Data System (ADS)

    Aprile, E.; Aalbers, J.; Agostini, F.; Alfonsi, M.; Amaro, F. D.; Anthony, M.; Arneodo, F.; Barrow, P.; Baudis, L.; Bauermeister, B.; Benabderrahmane, M. L.; Berger, T.; Breur, P. A.; Brown, A.; Brown, E.; Bruenner, S.; Bruno, G.; Budnik, R.; Bütikofer, L.; Calvén, J.; Cardoso, J. M. R.; Cervantes, M.; Cichon, D.; Coderre, D.; Colijn, A. P.; Conrad, J.; Cussonneau, J. P.; Decowski, M. P.; de Perio, P.; di Gangi, P.; di Giovanni, A.; Diglio, S.; Duchovni, E.; Eurin, G.; Fei, J.; Ferella, A. D.; Fieguth, A.; Franco, D.; Fulgione, W.; Gallo Rosso, A.; Galloway, M.; Gao, F.; Garbini, M.; Geis, C.; Goetzke, L. W.; Grandi, L.; Greene, Z.; Grignon, C.; Hasterok, C.; Hogenbirk, E.; Itay, R.; Kaminsky, B.; Kessler, G.; Kish, A.; Landsman, H.; Lang, R. F.; Lellouch, D.; Levinson, L.; Le Calloch, M.; Lin, Q.; Lindemann, S.; Lindner, M.; Lopes, J. A. M.; Manfredini, A.; Maris, I.; Marrodán Undagoitia, T.; Masbou, J.; Massoli, F. V.; Masson, D.; Mayani, D.; Meng, Y.; Messina, M.; Micheneau, K.; Miguez, B.; Molinario, A.; Murra, M.; Naganoma, J.; Ni, K.; Oberlack, U.; Orrigo, S. E. A.; Pakarha, P.; Pelssers, B.; Persiani, R.; Piastra, F.; Pienaar, J.; Piro, M.-C.; Plante, G.; Priel, N.; Rauch, L.; Reichard, S.; Reuter, C.; Rizzo, A.; Rosendahl, S.; Rupp, N.; Saldanha, R.; Dos Santos, J. M. F.; Sartorelli, G.; Scheibelhut, M.; Schindler, S.; Schreiner, J.; Schumann, M.; Scotto Lavina, L.; Selvi, M.; Shagin, P.; Shockley, E.; Silva, M.; Simgen, H.; Sivers, M. V.; Stein, A.; Thers, D.; Tiseni, A.; Trinchero, G.; Tunnell, C.; Upole, N.; Wang, H.; Wei, Y.; Weinheimer, C.; Wulf, J.; Ye, J.; Zhang, Y.; Xenon Collaboration

    2017-04-01

    A 220Rn source is deployed on the XENON100 dark matter detector in order to address the challenges in calibration of tonne-scale liquid noble element detectors. We show that the 212Pb beta emission can be used for low-energy electronic recoil calibration in searches for dark matter. The isotope spreads throughout the entire active region of the detector, and its activity naturally decays below background level within a week after the source is closed. We find no increase in the activity of the troublesome 222Rn background after calibration. Alpha emitters are also distributed throughout the detector and facilitate calibration of its response to 222Rn. Using the delayed coincidence of 220Rn-216Po, we map for the first time the convective motion of particles in the XENON100 detector. Additionally, we make a competitive measurement of the half-life of 212Po, t1/2 = (293.9 ± 1.0 (stat) ± 0.6 (sys)) ns.

  18. First application of combined isochronous and Schottky mass spectrometry: Half-lives of fully ionized 49Cr24+ and 53Fe26+ atoms

    NASA Astrophysics Data System (ADS)

    Tu, X. L.; Chen, X. C.; Zhang, J. T.; Shuai, P.; Yue, K.; Xu, X.; Fu, C. Y.; Zeng, Q.; Zhou, X.; Xing, Y. M.; Wu, J. X.; Mao, R. S.; Mao, L. J.; Fang, K. H.; Sun, Z. Y.; Wang, M.; Yang, J. C.; Litvinov, Yu. A.; Blaum, K.; Zhang, Y. H.; Yuan, Y. J.; Ma, X. W.; Zhou, X. H.; Xu, H. S.

    2018-01-01

    Lifetime measurements of β-decaying highly charged ions have been performed in the experimental storage ring (CSRe) by applying isochronous Schottky mass spectrometry. The fully ionized 49Cr and 53Fe ions were produced in projectile fragmentation of a 58Ni primary beam and were stored in the CSRe tuned into the isochronous ion-optical mode. The new resonant Schottky detector was applied to monitor the intensities of the stored uncooled 49Cr24+ and 53Fe26+ ions. The extracted half-lives T1/2(49Cr24+) = 44.0(27) min and T1/2(53Fe26+) = 8.47(19) min are in excellent agreement with the literature half-life values corrected for the disabled electron capture branchings. This is an important proof-of-principle step towards realizing simultaneous mass and lifetime measurements on exotic nuclei at future storage ring facilities.

  19. Skyrme density functional description of the double magic 78Ni nucleus

    NASA Astrophysics Data System (ADS)

    Brink, D. M.; Stancu, Fl.

    2018-06-01

    We calculate the single-particle spectrum of the doubly magic nucleus 78Ni in a Hartree-Fock approach using the Skyrme density-dependent effective interaction containing central, spin-orbit, and tensor parts. We show that the tensor part has an important effect on the spin-orbit splitting of the proton 1f orbit, which may explain the survival of magicity so far from the stability valley. We confirm the inversion of the 1f5/2 and 2p3/2 levels at neutron number 48 in the Ni isotopic chain, expected from previous Monte Carlo shell-model calculations and supported by experimental observation.

  20. Exploratory study of fission product yield determination from photofission of 239Pu at 11 MeV with monoenergetic photons

    NASA Astrophysics Data System (ADS)

    Bhike, Megha; Tornow, W.; Krishichayan; Tonchev, A. P.

    2017-02-01

    Measurements of fission product yields play an important role in the understanding of fundamental aspects of the fission process. Recently, neutron-induced fission product-yield data for 239Pu at energies below 4 MeV revealed an unexpected energy dependence for certain fission fragments. In order to investigate whether this observation is specific to neutron-induced fission, a program has been initiated to measure fission product yields in photoinduced fission. Here we report on the first-ever photofission product-yield measurement with monoenergetic photons produced by Compton back-scattering of FEL photons. The experiment was performed at the High-Intensity Gamma-ray Source at Triangle Universities Nuclear Laboratory on 239Pu at Eγ = 11 MeV. In this exploratory study the yields of eight fission products ranging from 91Sr to 143Ce have been obtained.

  1. Limit on Tensor Currents from 8Li β Decay

    NASA Astrophysics Data System (ADS)

    Sternberg, M. G.; Segel, R.; Scielzo, N. D.; Savard, G.; Clark, J. A.; Bertone, P. F.; Buchinger, F.; Burkey, M.; Caldwell, S.; Chaudhuri, A.; Crawford, J. E.; Deibel, C. M.; Greene, J.; Gulick, S.; Lascar, D.; Levand, A. F.; Li, G.; Pérez Galván, A.; Sharma, K. S.; Van Schelt, J.; Yee, R. M.; Zabransky, B. J.

    2015-10-01

    In the standard model, the weak interaction is formulated with a purely vector-axial-vector (V−A) structure. Without restriction on the chirality of the neutrino, the most general limits on tensor currents from nuclear β decay are dominated by a single measurement of the β-ν̄ correlation in 6He β decay dating back over a half century. In the present work, the β-ν̄-α correlation in the β decay of 8Li and the subsequent α-particle breakup of the 8Be* daughter was measured. The results are consistent with a purely V−A interaction and, in the case of couplings to right-handed neutrinos (CT = −CT′), limit the tensor fraction to |CT/CA|² < 0.011 (95.5% C.L.). The measurement confirms the 6He result using a different nuclear system and employing modern ion-trapping techniques subject to different systematic uncertainties.

  2. Half-life of the 15/2+ state of 135I: A test of E2 seniority relations

    NASA Astrophysics Data System (ADS)

    Spagnoletti, P.; Simpson, G. S.; Carroll, R.; Régis, J.-M.; Blanc, A.; Jentschel, M.; Köster, U.; Mutti, P.; Soldner, T.; de France, G.; Ur, C. A.; Urban, W.; Bruce, A. M.; Drouet, F.; Fraile, L. M.; Gaffney, L. P.; Ghitǎ, D. G.; Ilieva, S.; Jolie, J.; Korten, W.; Kröll, T.; Larijarni, C.; Lalkovski, S.; Licǎ, R.; Mach, H.; Mǎrginean, N.; Paziy, V.; Podolyák, Zs.; Regan, P. H.; Scheck, M.; Saed-Samii, N.; Thiamova, G.; Townsley, C.; Vancraeyenest, A.; Vedia, V.; Gargano, A.; Van Isacker, P.

    2017-02-01

    The half-life of the first 15/2+ state of the 3-valence-proton nucleus 135I has been measured to be 1.74(8) ns using the EXILL-FATIMA mixed array of Ge and LaBr3 detectors. The nuclei were produced following the cold-neutron-induced fission of a 235U target at the PF1B beam line of the Institut Laue-Langevin. The extracted B(E2; 15/2+ → 11/2+) value enabled, for the first time, a test of seniority relations between E2 transition rates. Large-scale shell-model calculations were performed for 134Te and 135I, and reinterpreted in a single-orbit approach. The results show that the two-body component of the E2 operator can be large, whereas energy shifts due to the three-body component of the effective interaction are small.

  3. First Result on the Neutrinoless Double-β Decay of 82Se with CUPID-0

    NASA Astrophysics Data System (ADS)

    Azzolini, O.; Barrera, M. T.; Beeman, J. W.; Bellini, F.; Beretta, M.; Biassoni, M.; Brofferio, C.; Bucci, C.; Canonica, L.; Capelli, S.; Cardani, L.; Carniti, P.; Casali, N.; Cassina, L.; Clemenza, M.; Cremonesi, O.; Cruciani, A.; D'Addabbo, A.; Dafinei, I.; Di Domizio, S.; Ferroni, F.; Gironi, L.; Giuliani, A.; Gorla, P.; Gotti, C.; Keppel, G.; Marini, L.; Martinez, M.; Morganti, S.; Nagorny, S.; Nastasi, M.; Nisi, S.; Nones, C.; Orlandi, D.; Pagnanini, L.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pessina, G.; Pettinacci, V.; Pirro, S.; Pozzi, S.; Previtali, E.; Puiu, A.; Reindl, F.; Rusconi, C.; Schäffner, K.; Tomei, C.; Vignati, M.; Zolotarova, A. S.

    2018-06-01

    We report the result of the search for neutrinoless double-beta decay of 82Se obtained with CUPID-0, the first large array of scintillating Zn82Se cryogenic calorimeters implementing particle identification. We observe no signal in a 1.83 kg·yr 82Se exposure, and we set the most stringent lower limit on the 82Se 0νββ half-life, T1/2(0ν) > 2.4 × 10²⁴ yr (90% credible interval), which corresponds to an effective Majorana neutrino mass mββ < (376-770) meV depending on the nuclear matrix element calculations. The heat-light readout provides a powerful tool for the rejection of α particles and allows us to suppress the background in the region of interest down to (3.6 +1.9/−1.4) × 10⁻³ counts/(keV kg yr), an unprecedented level for this technique.

  4. Deglacial temperature history of West Antarctica

    NASA Astrophysics Data System (ADS)

    Cuffey, Kurt M.; Clow, Gary D.; Steig, Eric J.; Buizert, Christo; Fudge, T. J.; Koutnik, Michelle; Waddington, Edwin D.; Alley, Richard B.; Severinghaus, Jeffrey P.

    2016-12-01

    The most recent glacial to interglacial transition constitutes a remarkable natural experiment for learning how Earth’s climate responds to various forcings, including a rise in atmospheric CO2. This transition has left a direct thermal remnant in the polar ice sheets, where the exceptional purity and continual accumulation of ice permit analyses not possible in other settings. For Antarctica, the deglacial warming has previously been constrained only by the water isotopic composition in ice cores, without an absolute thermometric assessment of the isotopes’ sensitivity to temperature. To overcome this limitation, we measured temperatures in a deep borehole and analyzed them together with ice-core data to reconstruct the surface temperature history of West Antarctica. The deglacial warming was 11.3 ± 1.8 °C, approximately two to three times the global average, in agreement with theoretical expectations for Antarctic amplification of planetary temperature changes. Consistent with evidence from glacier retreat in Southern Hemisphere mountain ranges, the Antarctic warming was mostly completed by 15 kyBP, several millennia earlier than in the Northern Hemisphere. These results constrain the role of variable oceanic heat transport between hemispheres during deglaciation and quantitatively bound the direct influence of global climate forcings on Antarctic temperature. Although climate models perform well on average in this context, some recent syntheses of deglacial climate history have underestimated Antarctic warming and the models with lowest sensitivity can be discounted.

  5. Improved Limit on Neutrinoless Double-β Decay of 76Ge from GERDA Phase II

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Bettini, A.; Bezrukov, L.; Biernat, J.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; Comellato, T.; D'Andrea, V.; Demidova, E. V.; di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Gangapshev, A.; Garfagnini, A.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hiller, R.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kermaidic, Y.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Panas, K.; Pandola, L.; Pelczar, K.; Pertoldi, L.; Pullia, A.; Ransom, C.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Schmitt, C.; Schneider, B.; Schönert, S.; Schütz, A.-K.; Schulz, O.; Schwingenheuer, B.; Selivanenko, O.; Shevchik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zschocke, A.; Zsigmond, A. J.; Zuber, K.; Zuzel, G.; Gerda Collaboration

    2018-03-01

    The GERDA experiment searches for the lepton-number-violating neutrinoless double-β decay of Ge 76 (Ge 76 → Se 76 + 2 e-) operating bare Ge diodes with an enriched Ge 76 fraction in liquid argon. The exposure for broad-energy germanium type (BEGe) detectors is increased threefold with respect to our previous data release. The BEGe detectors feature an excellent background suppression from the analysis of the time profile of the detector signals. In the analysis window a background level of 1.0 (+0.6/−0.4) × 10^-3 counts/(keV kg yr) has been achieved; if normalized to the energy resolution this is the lowest ever achieved in any 0νββ experiment. No signal is observed and a new 90% C.L. lower limit for the half-life of 8.0 × 10^25 yr is placed when combining with our previous data. The expected median sensitivity assuming no signal is 5.8 × 10^25 yr.

  6. Individual Low-Energy Toroidal Dipole State in Mg 24

    NASA Astrophysics Data System (ADS)

    Nesterenko, V. O.; Repko, A.; Kvasil, J.; Reinhard, P.-G.

    2018-05-01

    The low-energy dipole excitations in Mg 24 are investigated within the Skyrme quasiparticle random phase approximation for axial nuclei. The calculations with the force SLy6 reveal a remarkable feature: the lowest I^π K = 1^- 1 excitation (E = 7.92 MeV) in Mg 24 is a vortical toroidal state (TS) representing a specific vortex-antivortex realization of the well-known spherical Hill's vortex in a strongly deformed axial confinement. This is a striking example of an individual TS which can be much more easily discriminated in experiment than the toroidal dipole resonance embracing many states. The TS acquires the lowest energy due to the huge prolate axial deformation in Mg 24. The result persists for different Skyrme parametrizations (SLy6, SVbas, SkM*). We analyze spectroscopic properties of the TS and its relation with the cluster structure of Mg 24. Similar TSs could exist in other highly prolate light nuclei. They could serve as promising tests for various reactions to probe a vortical (toroidal) nuclear flow.

  7. Spatial anisotropy of neutrons emitted from the 56Fe(γ ,n )55Fe reaction with a linearly polarized γ -ray beam

    NASA Astrophysics Data System (ADS)

    Hayakawa, T.; Shizuma, T.; Miyamoto, S.; Amano, S.; Takemoto, A.; Yamaguchi, M.; Horikawa, K.; Akimune, H.; Chiba, S.; Ogata, K.; Fujiwara, M.

    2016-04-01

    We have measured the azimuthal anisotropy of neutrons emitted from the 56Fe(γ ,n )55Fe reaction with a linearly polarized γ -ray beam generated by laser Compton scattering at NewSUBARU. Neutron yields at the polar angle of 90∘ have been measured as a function of the azimuthal angle ϕ between the detector and the linear polarization plane of the γ -ray beam. The azimuthal anisotropy of neutrons measured at ϕ =0∘ , 10∘, 25∘, 45∘, 60∘, 70∘, and 90∘ has been well reproduced using a theoretically predicted function of a +b cos(2 ϕ ) .

  8. Summation of the product of certain functions and generalized Fibonacci numbers

    NASA Astrophysics Data System (ADS)

    Chong, Chin-Yoon; Ang, Siew-Ling; Ho, C. K.

    2014-12-01

    In this paper, we derived the summations ∑_{i=0}^{n} f(i)U_i and ∑_{i=0}^{∞} f(i)U_i for certain functions f(i), where {U_i} is the generalized Fibonacci sequence defined by U_{n+2} = pU_{n+1} + qU_n for all p, q ∈ Z+ and for all non-negative integers n, with the seed values U_0 = 0, U_1 = 1.
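    As a quick illustration of the objects being summed, the following minimal Python sketch generates the generalized Fibonacci sequence U_{n+2} = pU_{n+1} + qU_n (U_0 = 0, U_1 = 1) and evaluates a finite weighted sum ∑_{i=0}^{n} f(i)U_i numerically. The choices p = q = 1 and f(i) = i are illustrative only and are not taken from the paper, which derives closed-form expressions rather than numerical sums.

      # Minimal sketch (illustrative, not from the paper): generate the generalized
      # Fibonacci sequence U_{n+2} = p*U_{n+1} + q*U_n with U_0 = 0, U_1 = 1, and
      # form the finite sum S_n = sum_{i=0}^{n} f(i)*U_i for an example weight f.

      def generalized_fibonacci(p, q, n_terms):
          """Return [U_0, ..., U_{n_terms-1}]."""
          U = [0, 1]
          while len(U) < n_terms:
              U.append(p * U[-1] + q * U[-2])
          return U[:n_terms]

      def weighted_partial_sum(f, p, q, n):
          """Compute sum_{i=0}^{n} f(i) * U_i."""
          U = generalized_fibonacci(p, q, n + 1)
          return sum(f(i) * U[i] for i in range(n + 1))

      if __name__ == "__main__":
          # p = q = 1 reproduces the ordinary Fibonacci numbers; f(i) = i is an
          # arbitrary example weight chosen only for illustration.
          print(generalized_fibonacci(1, 1, 10))             # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
          print(weighted_partial_sum(lambda i: i, 1, 1, 9))  # 659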

  9. Impact of fission neutron energies on reactor antineutrino spectra

    NASA Astrophysics Data System (ADS)

    Littlejohn, B. R.; Conant, A.; Dwyer, D. A.; Erickson, A.; Gustafson, I.; Hermanek, K.

    2018-04-01

    Recent measurements of reactor-produced antineutrino fluxes and energy spectra are inconsistent with models based on measured thermal fission beta spectra. In this paper, we examine the dependence of antineutrino production on fission neutron energy. In particular, the variation of fission product yields with neutron energy has been considered as a possible source of the discrepancies between antineutrino observations and models. In simulations of low-enriched and highly-enriched reactor core designs, we find a substantial fraction of fissions (from 5% to more than 40%) are caused by nonthermal neutrons. Using tabulated evaluations of nuclear fission and decay, we estimate the variation in antineutrino emission by the prominent fission parents U 235 , Pu 239 , and Pu 241 versus neutron energy. The differences in fission neutron energy are found to produce less than 1% variation in detected antineutrino rate per fission of U 235 , Pu 239 , and Pu 241 . Corresponding variations in the antineutrino spectrum are found to be less than 10% below 7 MeV antineutrino energy, smaller than current model uncertainties. We conclude that insufficient modeling of fission neutron energy is unlikely to be the cause of the various reactor anomalies. Our results also suggest that comparisons of antineutrino measurements at low-enriched and highly-enriched reactors can safely neglect the differences in the distributions of their fission neutron energies.

  10. Selectivity of Electronic Coherence and Attosecond Ionization Delays in Strong-Field Double Ionization

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuki; Reduzzi, Maurizio; Chang, Kristina F.; Timmers, Henry; Neumark, Daniel M.; Leone, Stephen R.

    2018-06-01

    Experiments are presented on real-time probing of coherent electron dynamics in xenon initiated by strong-field double ionization. Attosecond transient absorption measurements allow for characterization of electronic coherences as well as relative ionization timings in multiple electronic states of Xe+ and Xe2+. A high degree of coherence g = 0.4 is observed between the 3P2 and 3P0 states of Xe2+, whereas for other possible pairs of states the coherences are below the detection limits of the experiments. A comparison of the experimental results with numerical simulations based on an uncorrelated electron-emission model shows that the coherences produced by strong-field double ionization are more selective than predicted. Surprisingly short ionization time delays, 0.85 fs, 0.64 fs, and 0.75 fs relative to Xe+ formation, are also measured for the 3P2, 3P0, and 3P1 states of Xe2+, respectively. Both the unpredicted selectivity in the formation of coherence and the subfemtosecond time delays of specific states provide new insight into correlated electron dynamics in strong-field double ionization.

  11. Medical marijuana laws and adolescent marijuana use in the United States: a systematic review and meta‐analysis

    PubMed Central

    Sarvet, Aaron L.; Wall, Melanie M.; Fink, David S.; Greene, Emily; Le, Aline; Boustead, Anne E.; Pacula, Rosalie Liccardo; Keyes, Katherine M.; Cerdá, Magdalena; Galea, Sandro

    2018-01-01

    Abstract Aims To conduct a systematic review and meta‐analysis of studies in order to estimate the effect of US medical marijuana laws (MMLs) on past‐month marijuana use prevalence among adolescents. Methods A total of 2999 papers from 17 literature sources were screened systematically. Eleven studies, developed from four ongoing large national surveys, were meta‐analyzed. Estimates of MML effects on any past‐month marijuana use prevalence from included studies were obtained from comparisons of pre–post MML changes in MML states to changes in non‐MML states over comparable time‐periods. These estimates were standardized and entered into a meta‐analysis model with fixed‐effects for each study. Heterogeneity among the study estimates by national data survey was tested with an omnibus F‐test. Estimates of effects on additional marijuana outcomes, of MML provisions (e.g. dispensaries) and among demographic subgroups were abstracted and summarized. Key methodological and modeling characteristics were also described. Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) guidelines were followed. Results None of the 11 studies found significant estimates of pre–post MML changes compared with contemporaneous changes in non‐MML states for marijuana use prevalence among adolescents. The meta‐analysis yielded a non‐significant pooled estimate (standardized mean difference) of −0.003 (95% confidence interval = −0.012, +0.007). Four studies compared MML with non‐MML states on pre‐MML differences and all found higher rates of past‐month marijuana use in MML states pre‐MML passage. Additional tests of specific MML provisions, of MML effects on additional marijuana outcomes and among subgroups generally yielded non‐significant results, although limited heterogeneity may warrant further study. Conclusions Synthesis of the current evidence does not support the hypothesis that US medical marijuana laws (MMLs) until 2014 have led to increases in adolescent marijuana use prevalence. Limited heterogeneity exists among estimates of effects of MMLs on other patterns of marijuana use, of effects within particular population subgroups and of effects of specific MML provisions. PMID:29468763
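    For readers unfamiliar with the pooling step described above, the following Python sketch shows a generic inverse-variance fixed-effect combination of standardized mean differences with a 95% confidence interval. The input estimates and standard errors are hypothetical placeholders rather than the study-level values from the review, and the review's actual model (fixed effects for each study fitted jointly) is only approximated by this schematic.

      # Schematic fixed-effect (inverse-variance) pooling of standardized mean
      # differences. The per-study estimates and standard errors below are
      # hypothetical, NOT values taken from the reviewed studies.
      import math

      def fixed_effect_pool(estimates, std_errors):
          """Return (pooled estimate, pooled SE, 95% CI) under a fixed-effect model."""
          weights = [1.0 / se ** 2 for se in std_errors]
          pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
          pooled_se = math.sqrt(1.0 / sum(weights))
          ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
          return pooled, pooled_se, ci

      if __name__ == "__main__":
          smd = [-0.010, 0.005, -0.004, 0.002]   # hypothetical standardized mean differences
          se = [0.008, 0.010, 0.006, 0.012]      # hypothetical standard errors
          print(fixed_effect_pool(smd, se))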

  12. US Adult Illicit Cannabis Use, Cannabis Use Disorder, and Medical Marijuana Laws: 1991-1992 to 2012-2013.

    PubMed

    Hasin, Deborah S; Sarvet, Aaron L; Cerdá, Magdalena; Keyes, Katherine M; Stohl, Malka; Galea, Sandro; Wall, Melanie M

    2017-06-01

    Over the last 25 years, illicit cannabis use and cannabis use disorders have increased among US adults, and 28 states have passed medical marijuana laws (MML). Little is known about MML and adult illicit cannabis use or cannabis use disorders considered over time. To present national data on state MML and degree of change in the prevalence of cannabis use and disorders. Differences in the degree of change between those living in MML states and other states were examined using 3 cross-sectional US adult surveys: the National Longitudinal Alcohol Epidemiologic Survey (NLAES; 1991-1992), the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC; 2001-2002), and the National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III; 2012-2013). Early-MML states passed MML between NLAES and NESARC ("earlier period"). Late-MML states passed MML between NESARC and NESARC-III ("later period"). Past-year illicit cannabis use and DSM-IV cannabis use disorder. Overall, from 1991-1992 to 2012-2013, illicit cannabis use increased significantly more in states that passed MML than in other states (1.4-percentage point more; SE, 0.5; P = .004), as did cannabis use disorders (0.7-percentage point more; SE, 0.3; P = .03). In the earlier period, illicit cannabis use and disorders decreased similarly in non-MML states and in California (where prevalence was much higher to start with). In contrast, in remaining early-MML states, the prevalence of use and disorders increased. Remaining early-MML and non-MML states differed significantly for use (by 2.5 percentage points; SE, 0.9; P = .004) and disorder (1.1 percentage points; SE, 0.5; P = .02). In the later period, illicit use increased by the following percentage points: never-MML states, 3.5 (SE, 0.5); California, 5.3 (SE, 1.0); Colorado, 7.0 (SE, 1.6); other early-MML states, 2.6 (SE, 0.9); and late-MML states, 5.1 (SE, 0.8). Compared with never-MML states, increases in use were significantly greater in late-MML states (1.6-percentage point more; SE, 0.6; P = .01), California (1.8-percentage point more; SE, 0.9; P = .04), and Colorado (3.5-percentage point more; SE, 1.5; P = .03). Increases in cannabis use disorder, which was less prevalent, were smaller but followed similar patterns descriptively, with change greater than never-MML states in California (1.0-percentage point more; SE, 0.5; P = .06) and Colorado (1.6-percentage point more; SE, 0.8; P = .04). Medical marijuana laws appear to have contributed to increased prevalence of illicit cannabis use and cannabis use disorders. State-specific policy changes may also have played a role. While medical marijuana may help some, cannabis-related health consequences associated with changes in state marijuana laws should receive consideration by health care professionals and the public.

  13. Nuclear polarization effects in big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Voronchev, Victor T.; Nakao, Yasuyuki

    2015-10-01

    A standard nuclear reaction network for big bang nucleosynthesis (BBN) simulations operates with spin-averaged nuclear inputs—unpolarized reaction cross sections. At the same time, most of the reactions controlling the abundances of light elements are spin dependent, i.e., their cross sections depend on the mutual orientation of reacting particle spins. Primordial magnetic fields in the BBN epoch may to a certain degree polarize particles and thereby affect some reactions between them, introducing uncertainties in standard BBN predictions. To clarify these points, we have examined the effects of induced polarization on key BBN reactions—p (n ,γ )d , d (d ,p )t , d (d ,n )He 3 , t (d ,n )α , He 3 (n ,p )t , He 3 (d ,p )α , Li 7 (p ,α )α , Be 7 (n ,p )Li 7 —and the abundances of elements with A ≤ 7. It is found that a magnetic field with strength B0 ≤ 10^12 G (at a temperature of 10^9 K) has almost no effect on the reaction cross sections, and the spin polarization mechanism plays a minor role in the element production, changing the abundances at most by 0.01%. However, if the magnetic field B0 reaches 10^15 G its effect on the key reactions appears and becomes appreciable at B0 ≳ 10^16 G. In particular, it has been found that such a field can increase the p (n ,γ )d cross section (relevant to the starting point of BBN) by a factor of 2 and at the same time almost block the He 3 (n ,p )t reaction responsible for the interconversion of A = 3 nuclei in the early Universe. This suggests that the spin polarization effects may become important in nonstandard scenarios of BBN considering the existence of local magnetic bubbles inside which the field can reach ~10^15 G.

  14. Distillation with Sublogarithmic Overhead

    NASA Astrophysics Data System (ADS)

    Hastings, Matthew B.; Haah, Jeongwan

    2018-02-01

    It has been conjectured that, for any distillation protocol for magic states for the T gate, the number of noisy input magic states required per output magic state at output error rate ε is Ω[log(1/ε)]. We show that this conjecture is false. We find a family of quantum error correcting codes of parameters ⟦∑_{i=w+1}^{m} C(m,i), ∑_{i=0}^{w} C(m,i), ∑_{i=w+1}^{r+1} C(r+1,i)⟧ for any integers m > 2r, r > w ≥ 0, by puncturing quantum Reed-Muller codes. When m > νr, our code admits a transversal logical gate at the νth level of the Clifford hierarchy. In a distillation protocol for magic states at level ν = 3 (the T gate), the ratio of input to output magic states is O(log^γ(1/ε)), where γ = log(n/k)/log(d) < 0.678 for some m, r, w. The smallest code in our family for which γ < 1 is on ≈2^58 qubits.
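    The code parameters quoted above are sums of binomial coefficients, so they can be evaluated directly. The short Python sketch below computes n, k, d, and γ = log(n/k)/log(d) for sample integers satisfying m > 2r and r > w ≥ 0; the small example values are illustrative and give γ > 1, consistent with the statement that only very large members of the family reach γ < 1.

      # Sketch: evaluate the quoted punctured Reed-Muller code parameters
      #   n = sum_{i=w+1}^{m} C(m, i),  k = sum_{i=0}^{w} C(m, i),
      #   d = sum_{i=w+1}^{r+1} C(r+1, i),  gamma = log(n/k) / log(d),
      # for example integers with m > 2r and r > w >= 0 (values are illustrative).
      from math import comb, log

      def code_parameters(m, r, w):
          assert m > 2 * r and r > w >= 0
          n = sum(comb(m, i) for i in range(w + 1, m + 1))
          k = sum(comb(m, i) for i in range(0, w + 1))
          d = sum(comb(r + 1, i) for i in range(w + 1, r + 2))
          gamma = log(n / k) / log(d)
          return n, k, d, gamma

      if __name__ == "__main__":
          print(code_parameters(7, 3, 1))   # small example: n = 120, k = 8, d = 11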

  15. Imaging the He2 quantum halo state using a free electron laser

    NASA Astrophysics Data System (ADS)

    Zeller, Stefan; Kunitski, Maksim; Voigtsberger, Jörg; Kalinin, Anton; Schottelius, Alexander; Schober, Carl; Waitz, Markus; Sann, Hendrik; Hartung, Alexander; Bauer, Tobias; Pitzer, Martin; Trinter, Florian; Goihl, Christoph; Janke, Christian; Richter, Martin; Kastirke, Gregor; Weller, Miriam; Czasch, Achim; Kitzler, Markus; Braune, Markus; Grisenti, Robert E.; Schöllkopf, Wieland; Schmidt, Lothar Ph. H.; Schöffler, Markus S.; Williams, Joshua B.; Jahnke, Till; Dörner, Reinhard

    2016-12-01

    Quantum tunneling is a ubiquitous phenomenon in nature and crucial for many technological applications. It allows quantum particles to reach regions in space which are energetically not accessible according to classical mechanics. In this “tunneling region,” the particle density is known to decay exponentially. This behavior is universal across all energy scales from nuclear physics to chemistry and solid state systems. Although typically only a small fraction of a particle wavefunction extends into the tunneling region, we present here an extreme quantum system: a gigantic molecule consisting of two helium atoms, with an 80% probability that its two nuclei will be found in this classically forbidden region. This circumstance allows us to directly image the exponentially decaying density of a tunneling particle, which we achieved over more than two orders of magnitude. Imaging a tunneling particle shows one of the few features of our world that is truly universal: the probability to find one of the constituents of bound matter far away is never zero but decreases exponentially. The results were obtained by Coulomb explosion imaging using a free electron laser and furthermore yielded He2’s binding energy of 151.9 ± 13.3 neV, which is in agreement with most recent calculations.

  16. The development of MML (Medical Markup Language) version 3.0 as a medical document exchange format for HL7 messages.

    PubMed

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2004-12-01

    Medical Markup Language (MML), as a set of standards, has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML Version 2.21 used XML as a metalanguage and was announced in 1999. In 2001, MML was updated to Version 2.3, which contained 12 modules. The latest version--Version 3.0--is based on the HL7 Clinical Document Architecture (CDA). During the development of this new version, the structure of MML Version 2.3 was analyzed, subdivided into several categories, and redefined so the information defined in MML could be described in HL7 CDA Level One. As a result of this development, it has become possible to exchange MML Version 3.0 medical documents via HL7 messages.

  17. US Adult Illicit Cannabis Use, Cannabis Use Disorder, and Medical Marijuana Laws

    PubMed Central

    Sarvet, Aaron L.; Cerdá, Magdalena; Keyes, Katherine M.; Stohl, Malka; Galea, Sandro; Wall, Melanie M.

    2017-01-01

    Importance Over the last 25 years, illicit cannabis use and cannabis use disorders have increased among US adults, and 28 states have passed medical marijuana laws (MML). Little is known about MML and adult illicit cannabis use or cannabis use disorders considered over time. Objective To present national data on state MML and degree of change in the prevalence of cannabis use and disorders. Design, Participants, and Setting Differences in the degree of change between those living in MML states and other states were examined using 3 cross-sectional US adult surveys: the National Longitudinal Alcohol Epidemiologic Survey (NLAES; 1991-1992), the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC; 2001-2002), and the National Epidemiologic Survey on Alcohol and Related Conditions–III (NESARC-III; 2012-2013). Early-MML states passed MML between NLAES and NESARC (“earlier period”). Late-MML states passed MML between NESARC and NESARC-III (“later period”). Main Outcomes and Measures Past-year illicit cannabis use and DSM-IV cannabis use disorder. Results Overall, from 1991-1992 to 2012-2013, illicit cannabis use increased significantly more in states that passed MML than in other states (1.4–percentage point more; SE, 0.5; P = .004), as did cannabis use disorders (0.7–percentage point more; SE, 0.3; P = .03). In the earlier period, illicit cannabis use and disorders decreased similarly in non-MML states and in California (where prevalence was much higher to start with). In contrast, in remaining early-MML states, the prevalence of use and disorders increased. Remaining early-MML and non-MML states differed significantly for use (by 2.5 percentage points; SE, 0.9; P = .004) and disorder (1.1 percentage points; SE, 0.5; P = .02). In the later period, illicit use increased by the following percentage points: never-MML states, 3.5 (SE, 0.5); California, 5.3 (SE, 1.0); Colorado, 7.0 (SE, 1.6); other early-MML states, 2.6 (SE, 0.9); and late-MML states, 5.1 (SE, 0.8). Compared with never-MML states, increases in use were significantly greater in late-MML states (1.6–percentage point more; SE, 0.6; P = .01), California (1.8–percentage point more; SE, 0.9; P = .04), and Colorado (3.5–percentage point more; SE, 1.5; P = .03). Increases in cannabis use disorder, which was less prevalent, were smaller but followed similar patterns descriptively, with change greater than never-MML states in California (1.0–percentage point more; SE, 0.5; P = .06) and Colorado (1.6–percentage point more; SE, 0.8; P = .04). Conclusions and Relevance Medical marijuana laws appear to have contributed to increased prevalence of illicit cannabis use and cannabis use disorders. State-specific policy changes may also have played a role. While medical marijuana may help some, cannabis-related health consequences associated with changes in state marijuana laws should receive consideration by health care professionals and the public. PMID:28445557

  18. Probing the Single-Particle Character of Rotational States in F 19 Using a Short-Lived Isomeric Beam

    NASA Astrophysics Data System (ADS)

    Santiago-Gonzalez, D.; Auranen, K.; Avila, M. L.; Ayangeakaa, A. D.; Back, B. B.; Bottoni, S.; Carpenter, M. P.; Chen, J.; Deibel, C. M.; Hood, A. A.; Hoffman, C. R.; Janssens, R. V. F.; Jiang, C. L.; Kay, B. P.; Kuvin, S. A.; Lauer, A.; Schiffer, J. P.; Sethi, J.; Talwar, R.; Wiedenhöver, I.; Winkelbauer, J.; Zhu, S.

    2018-03-01

    A beam containing a substantial component of both the Jπ = 5+, T1/2 = 162 ns isomeric state of F 18 and its 1+, 109.77-min ground state is utilized to study members of the ground-state rotational band in F 19 through the neutron transfer reaction (d,p) in inverse kinematics. The resulting spectroscopic strengths confirm the single-particle nature of the 13/2+ band-terminating state. The agreement between shell-model calculations, using an interaction constructed within the sd shell, and our experimental results reinforces the idea of a single-particle-collective duality in the descriptions of the structure of atomic nuclei.

  19. Rate-weakening friction characterizes both slow sliding and catastrophic failure of landslides

    NASA Astrophysics Data System (ADS)

    Handwerger, Alexander L.; Rempel, Alan W.; Skarbek, Rob M.; Roering, Joshua J.; Hilley, George E.

    2016-09-01

    Catastrophic landslides cause billions of dollars in damages and claim thousands of lives annually, whereas slow-moving landslides with negligible inertia dominate sediment transport on many weathered hillslopes. Surprisingly, both failure modes are displayed by nearby landslides (and individual landslides in different years) subjected to almost identical environmental conditions. Such observations have motivated the search for mechanisms that can cause slow-moving landslides to transition via runaway acceleration to catastrophic failure. A similarly diverse range of sliding behavior, including earthquakes and slow-slip events, occurs along tectonic faults. Our understanding of these phenomena has benefitted from mechanical treatments that rely upon key ingredients that are notably absent from previous landslide descriptions. Here, we describe landslide motion using a rate- and state-dependent frictional model that incorporates a nonlocal stress balance to account for the elastic response to gradients in slip. Our idealized, one-dimensional model reproduces both the displacement patterns observed in slow-moving landslides and the acceleration toward failure exhibited by catastrophic events. Catastrophic failure occurs only when the slip surface is characterized by rate-weakening friction and its lateral dimensions exceed a critical nucleation length h* that is shorter for higher effective stresses. However, landslides that are extensive enough to fall within this regime can nevertheless slide slowly for months or years before catastrophic failure. Our results suggest that the diversity of slip behavior observed during landslides can be described with a single model adapted from standard fault mechanics treatments.
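    The rate-weakening friction invoked above can be illustrated with the standard steady-state rate-and-state relation μ_ss(V) = μ_0 + (a − b) ln(V/V_0), which weakens with increasing slip speed when b > a. The Python sketch below evaluates this relation for illustrative parameter values; it is a generic single-degree-of-freedom caricature and omits the state evolution and nonlocal elastic stress balance of the model described in the paper.

      # Generic steady-state rate-and-state friction (illustrative, not the
      # authors' full nonlocal model): mu_ss(V) = mu0 + (a - b) * ln(V / V0).
      # The surface is rate-weakening (friction drops with speed) when b > a.
      import math

      def mu_steady_state(v, mu0=0.6, a=0.005, b=0.010, v0=1e-6):
          """Steady-state friction coefficient at slip speed v (m/s); parameters are illustrative."""
          return mu0 + (a - b) * math.log(v / v0)

      if __name__ == "__main__":
          for v in (1e-8, 1e-7, 1e-6, 1e-5, 1e-4):
              print(f"V = {v:.0e} m/s  ->  mu_ss = {mu_steady_state(v):.4f}")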

  20. Energy Dependent Stereodynamics of the Ne (3P2)+Ar Reaction

    NASA Astrophysics Data System (ADS)

    Gordon, Sean D. S.; Zou, Junwen; Tanteri, Silvia; Jankunas, Justin; Osterwalder, Andreas

    2017-08-01

    The stereodynamics of the Ne(3P2) + Ar Penning and associative ionization reactions have been studied using a crossed molecular beam apparatus. The experiment uses a curved magnetic hexapole to polarize the Ne(3P2), which is then oriented with a shaped magnetic field in the region where it intersects with a beam of Ar(1S). The ratios of Penning to associative ionization were recorded over a range of collision energies from 320 to 500 cm^-1 and the data were used to obtain Ω-state-dependent reactivities for the two reaction channels. These reactivities were found to compare favorably to those predicted in the theoretical work of Brumer et al.

  1. Evidence of Soft Dipole Resonance in Li 11 with Isoscalar Character

    NASA Astrophysics Data System (ADS)

    Kanungo, R.; Sanetullaev, A.; Tanaka, J.; Ishimoto, S.; Hagen, G.; Myo, T.; Suzuki, T.; Andreoiu, C.; Bender, P.; Chen, A. A.; Davids, B.; Fallis, J.; Fortin, J. P.; Galinski, N.; Gallant, A. T.; Garrett, P. E.; Hackman, G.; Hadinia, B.; Jansen, G.; Keefe, M.; Krücken, R.; Lighthall, J.; McNeice, E.; Miller, D.; Otsuka, T.; Purcell, J.; Randhawa, J. S.; Roger, T.; Rojas, A.; Savajols, H.; Shotter, A.; Tanihata, I.; Thompson, I. J.; Unsworth, C.; Voss, P.; Wang, Z.

    2015-05-01

    The first conclusive evidence of a dipole resonance in Li 11 having isoscalar character observed from inelastic scattering with a novel solid deuteron target is reported. The experiment was performed at the newly commissioned IRIS facility at TRIUMF. The results show a resonance peak at an excitation energy of 1.03 ±0.03 MeV with a width of 0.51 ±0.11 MeV (FWHM). The angular distribution is consistent with a dipole excitation in the distorted-wave Born approximation framework. The observed resonance energy together with shell model calculations show the first signature that the monopole tensor interaction is important in Li 11 . The first ab initio calculations in the coupled cluster framework are also presented.

  2. Disordered Berezinskii-Kosterlitz-Thouless transition and superinsulation

    NASA Astrophysics Data System (ADS)

    Sankar, S.; Vinokur, V. M.; Tripathi, V.

    2018-01-01

    We investigate the critical Berezinskii-Kosterlitz-Thouless (BKT) behavior of disordered two-dimensional Josephson-junction arrays (JJA) on the insulating side of the superconductor-insulator transition (SIT), taking into account the effect of hitherto ignored residual random dipole moments of the superconducting grains. We show that for weak Josephson coupling the model is equivalent to a Coulomb gas subjected to a disorder potential with logarithmic correlations. We demonstrate that strong enough disorder transforms the BKT divergence of the correlation length, ξ_BKT ∝ exp(const/√(T − T_BKT)), characterizing the average distance between the unbound topological excitations of opposite signs, into a more singular Vogel-Fulcher-Tammann (VFT) behavior, ξ_VFT ∝ exp[const/(T − T_VFT)], which is viewed as a hallmark of glass transitions in glass-forming materials. We further show that the VFT criticality is a precursor of the transition into a nonergodic superinsulating state, while the BKT critical behavior implies freezing into an ergodic confined BKT state. Our finding sheds light on the yet unresolved problem of the origin of the VFT criticality.
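    The contrast between the two critical forms quoted above can be seen numerically: the VFT expression diverges far more violently on approach to its critical temperature than the BKT square-root form. The Python sketch below evaluates both expressions with arbitrary illustrative constants (prefactor constants and critical temperatures set to 1), which are not taken from the paper.

      # Compare the two quoted divergences of the correlation length:
      #   xi_BKT ~ exp(c / sqrt(T - Tc))   vs   xi_VFT ~ exp(c / (T - Tc)).
      # Constants and temperatures are arbitrary illustrative numbers.
      import math

      def xi_bkt(T, Tc=1.0, c=1.0):
          return math.exp(c / math.sqrt(T - Tc))

      def xi_vft(T, Tc=1.0, c=1.0):
          return math.exp(c / (T - Tc))

      if __name__ == "__main__":
          for dT in (0.5, 0.1, 0.02, 0.005):
              T = 1.0 + dT
              print(f"T - Tc = {dT:<6}  xi_BKT = {xi_bkt(T):.3e}   xi_VFT = {xi_vft(T):.3e}")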

  3. Three-Body Recombination near a Narrow Feshbach Resonance in Li 6

    NASA Astrophysics Data System (ADS)

    Li, Jiaming; Liu, Ji; Luo, Le; Gao, Bo

    2018-05-01

    We experimentally measure and theoretically analyze the three-atom recombination rate, L3, around a narrow s -wave magnetic Feshbach resonance of Li 6 - Li 6 at 543.3 G. By examining both the magnetic field dependence and, especially, the temperature dependence of L3 over a wide range of temperatures from a few μ K to above 200 μ K , we show that three-atom recombination through a narrow resonance follows a universal behavior determined by the long-range van der Waals potential and can be described by a set of rate equations in which three-body recombination proceeds via successive pairwise interactions. We expect the underlying physical picture to be applicable not only to narrow s wave resonances, but also to resonances in nonzero partial waves, and not only at ultracold temperatures, but also at much higher temperatures.

  4. Search for two-neutrino double electron capture of 124Xe with XENON100

    NASA Astrophysics Data System (ADS)

    Aprile, E.; Aalbers, J.; Agostini, F.; Alfonsi, M.; Amaro, F. D.; Anthony, M.; Arneodo, F.; Barrow, P.; Baudis, L.; Bauermeister, B.; Benabderrahmane, M. L.; Berger, T.; Breur, P. A.; Brown, A.; Brown, E.; Bruenner, S.; Bruno, G.; Budnik, R.; Bütikofer, L.; Calvén, J.; Cardoso, J. M. R.; Cervantes, M.; Cichon, D.; Coderre, D.; Colijn, A. P.; Conrad, J.; Cussonneau, J. P.; Decowski, M. P.; de Perio, P.; di Gangi, P.; di Giovanni, A.; Diglio, S.; Duchovni, E.; Fei, J.; Ferella, A. D.; Fieguth, A.; Franco, D.; Fulgione, W.; Gallo Rosso, A.; Galloway, M.; Gao, F.; Garbini, M.; Geis, C.; Goetzke, L. W.; Greene, Z.; Grignon, C.; Hasterok, C.; Hogenbirk, E.; Itay, R.; Kaminsky, B.; Kessler, G.; Kish, A.; Landsman, H.; Lang, R. F.; Lellouch, D.; Levinson, L.; Le Calloch, M.; Levy, C.; Lin, Q.; Lindemann, S.; Lindner, M.; Lopes, J. A. M.; Manfredini, A.; Marrodán Undagoitia, T.; Masbou, J.; Massoli, F. V.; Masson, D.; Mayani, D.; Meng, Y.; Messina, M.; Micheneau, K.; Miguez, B.; Molinario, A.; Murra, M.; Naganoma, J.; Ni, K.; Oberlack, U.; Orrigo, S. E. A.; Pakarha, P.; Pelssers, B.; Persiani, R.; Piastra, F.; Pienaar, J.; Piro, M.-C.; Plante, G.; Priel, N.; Rauch, L.; Reichard, S.; Reuter, C.; Rizzo, A.; Rosendahl, S.; Rupp, N.; Dos Santos, J. M. F.; Sartorelli, G.; Scheibelhut, M.; Schindler, S.; Schreiner, J.; Schumann, M.; Scotto Lavina, L.; Selvi, M.; Shagin, P.; Silva, M.; Simgen, H.; Sivers, M. V.; Stein, A.; Thers, D.; Tiseni, A.; Trinchero, G.; Tunnell, C. D.; Wall, R.; Wang, H.; Weber, M.; Wei, Y.; Weinheimer, C.; Wulf, J.; Zhang, Y.; Xenon Collaboration

    2017-02-01

    Two-neutrino double electron capture is a rare nuclear decay where two electrons are simultaneously captured from the atomic shell. For 124Xe this process has not yet been observed and its detection would provide a new reference for nuclear matrix element calculations. We have conducted a search for two-neutrino double electron capture from the K shell of 124Xe using 7636 kg d of data from the XENON100 dark matter detector. Using a Bayesian analysis we observed no significant excess above background, leading to a lower 90% credibility limit on the half-life of T1/2 > 6.5 × 10^20 yr. We have also evaluated the sensitivity of the XENON1T experiment, which is currently being commissioned, and found a sensitivity of T1/2 > 6.1 × 10^22 yr after an exposure of 2 t yr.

  5. Mechanism of Film Cooling with One Inlet and Double Outlet Hole Injection at Various Turbulence Intensities

    NASA Astrophysics Data System (ADS)

    Li, Guangchao; Chen, Yukai; Kou, Zhihai; Zhang, Wei; Zhang, Guochen

    2018-03-01

    The trunk-branch hole was designed as a novel film cooling concept, which aims to improve film cooling performance by producing an anti-vortex. The trunk-branch hole is easily manufactured in comparison with the expanded hole since it consists of two cylindrical holes. The effect of turbulence on the film cooling effectiveness with a trunk-branch hole injection was investigated at blowing ratios of 0.5, 1.0, 1.5 and 2.0 by numerical simulation. Turbulence intensities from 0.4 % to 20 % were considered. The realizable k-ε turbulence model and the enhanced wall function were used. The more effective anti-vortex occurs at the low blowing ratio of 0.5. The high turbulence intensity causes the effectiveness to be evenly distributed in the spanwise direction. The increase of turbulence intensity leads to a slight decrease of the spanwise averaged effectiveness at the low blowing ratio of 0.5, but a significant increase at the high blowing ratios of 1.5 and 2.0. The optimal blowing ratio of the surface-averaged effectiveness is improved from 1.0 to 1.5 when the turbulence intensity increases from 0.4 % to 20 %.

  6. Local thermal energy as a structural indicator in glasses

    NASA Astrophysics Data System (ADS)

    Zylberg, Jacques; Lerner, Edan; Bar-Sinai, Yohai; Bouchbinder, Eran

    2017-07-01

    Identifying heterogeneous structures in glasses—such as localized soft spots—and understanding structure-dynamics relations in these systems remain major scientific challenges. Here, we derive an exact expression for the local thermal energy of interacting particles (the mean local potential energy change caused by thermal fluctuations) in glassy systems by a systematic low-temperature expansion. We show that the local thermal energy can attain anomalously large values, inversely related to the degree of softness of localized structures in a glass, determined by a coupling between internal stresses—an intrinsic signature of glassy frustration—anharmonicity and low-frequency vibrational modes. These anomalously large values follow a fat-tailed distribution, with a universal exponent related to the recently observed universal ω^4 density of states of quasilocalized low-frequency vibrational modes. When the spatial thermal energy field—a “softness field”—is considered, this power law tail manifests itself by highly localized spots, which are significantly softer than their surroundings. These soft spots are shown to be susceptible to plastic rearrangements under external driving forces, having predictive powers that surpass those of the normal-modes-based approach. These results offer a general, system/model-independent, physical/observable-based approach to identify structural properties of quiescent glasses and relate them to glassy dynamics.

  7. Double-β decay within a consistent deformed approach

    NASA Astrophysics Data System (ADS)

    Delion, D. S.; Suhonen, J.

    2015-05-01

    In this paper we present a timely application of the proton-neutron deformed quasiparticle random-phase approximation (pn-dQRPA), designed to describe in a consistent way the 1+ Gamow-Teller states in odd-odd deformed nuclei. For this purpose we apply a projection-before-variation procedure by using a single-particle basis with projected angular momentum, provided by the diagonalization of a spherical mean field plus quadrupole-quadrupole interaction. The residual Hamiltonian contains pairing plus proton-neutron dipole terms in the particle-hole and particle-particle channels, with constant strengths. As an example we describe the two-neutrino double-beta (2νββ) decay of 150Nd to the ground state of 150Sm. The experimental (p ,n ) type of strength in 150Nd and the (n ,p ) type of strength in 150Sm are reasonably reproduced, and the 2νββ decay matrix element shows a strong dependence on the particle-particle strength g_pp. The experimental half-life is reproduced for g_pp = 0.05. It turns out that the measured half-lives for 2νββ transitions between other deformed superfluid partners with mass numbers A = 82, 96, 100, 128, 130, 238 are reproduced with fairly good accuracy by using this value of g_pp.

  8. Dimensional crossover of effective orbital dynamics in polar distorted He 3 -A : Transitions to antispacetime

    NASA Astrophysics Data System (ADS)

    Nissinen, J.; Volovik, G. E.

    2018-01-01

    Topologically protected superfluid phases of He 3 allow one to simulate many important aspects of relativistic quantum field theories and quantum gravity in condensed matter. Here we discuss a topological Lifshitz transition of the effective quantum vacuum in which the determinant of the tetrad field changes sign through a crossing to a vacuum state with a degenerate fermionic metric. Such a transition is realized in polar distorted superfluid He 3 -A in terms of the effective tetrad fields emerging in the vicinity of the superfluid gap nodes: the tetrads of the Weyl points in the chiral A-phase of He 3 and the degenerate tetrad in the vicinity of a Dirac nodal line in the polar phase of He 3 . The continuous phase transition from the A-phase to the polar phase, i.e., the transition from the Weyl nodes to the Dirac nodal line and back, allows one to follow the behavior of the fermionic and bosonic effective actions when the sign of the tetrad determinant changes, and the effective chiral spacetime transforms to antichiral "anti-spacetime." This condensed matter realization demonstrates that while the original fermionic action is analytic across the transition, the effective actions for the orbital degrees of freedom (pseudo-EM fields) and gravity have nonanalytic behavior. In particular, the action for the pseudo-EM field in the vacuum with Weyl fermions (A-phase) contains the modulus of the tetrad determinant. In the vacuum with the degenerate metric (polar phase) the nodal line is effectively a family of 2+1d Dirac fermion patches, which leads to a non-analytic (B^2 − E^2)^{3/4} QED action in the vicinity of the Dirac line.

  9. Average CsI Neutron Density Distribution from COHERENT Data

    NASA Astrophysics Data System (ADS)

    Cadeddu, M.; Giunti, C.; Li, Y. F.; Zhang, Y. Y.

    2018-02-01

    Using the coherent elastic neutrino-nucleus scattering data of the COHERENT experiment, we determine for the first time the average neutron rms radius of Cs 133 and I 127 . We obtain the practically model-independent value Rn = 5.5 (+0.9/−1.1) fm using the symmetrized Fermi and Helm form factors. We also point out that the COHERENT data show 2.3σ evidence of the nuclear structure suppression of the full coherence.
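    The Helm form factor mentioned above has a standard closed form, F(q) = 3 j_1(qR_0)/(qR_0) exp(−q²s²/2), whose rms radius is ⟨r²⟩ = (3/5)R_0² + 3s². The Python sketch below evaluates it for generic parameter values; the box radius R_0 and surface thickness s used here are illustrative choices, not the COHERENT fit results.

      # Sketch of the standard Helm nuclear form factor:
      #   F(q) = 3 j1(q R0) / (q R0) * exp(-(q s)^2 / 2),
      # with rms radius <r^2> = (3/5) R0^2 + 3 s^2.
      # R0 and s below are generic illustrations, not fitted values.
      import math

      def helm_form_factor(q, R0, s=0.9):
          """Helm form factor at momentum transfer q (fm^-1); R0 and s in fm."""
          x = q * R0
          j1 = math.sin(x) / x ** 2 - math.cos(x) / x   # spherical Bessel function j1
          return 3.0 * j1 / x * math.exp(-0.5 * (q * s) ** 2)

      def helm_rms_radius(R0, s=0.9):
          return math.sqrt(0.6 * R0 ** 2 + 3.0 * s ** 2)

      if __name__ == "__main__":
          R0 = 6.0                                       # illustrative box radius (fm)
          print("R_rms =", helm_rms_radius(R0), "fm")
          print("F(q = 0.3 fm^-1) =", helm_form_factor(0.3, R0))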

  10. Optimal run-and-tumble-based transportation of a Janus particle with active steering

    NASA Astrophysics Data System (ADS)

    Mano, Tomoyuki; Delfau, Jean-Baptiste; Iwasawa, Junichiro; Sano, Masaki

    2017-03-01

    Although artificial micrometric swimmers can now be made using various propulsion mechanisms, guiding their motion in the presence of thermal fluctuations still remains a great challenge. Such a task is essential in biological systems, which present a number of intriguing solutions that are robust against noisy environmental conditions as well as variability in individual genetic makeup. Using synthetic Janus particles driven by an electric field, we present a feedback-based particle-guiding method quite analogous to the “run-and-tumble” behavior of Escherichia coli but with deterministic steering in the tumbling phase: the particle is set to the run state when its orientation vector aligns with the target, whereas the transition to the “steering” state is triggered when the misalignment exceeds a tolerance angle α. The active and deterministic reorientation of the particle is achieved by a characteristic rotational motion that can be switched on and off by modulating the ac frequency of the electric field, which is reported in this work. Relying on numerical simulations and analytical results, we show that this feedback algorithm can be optimized by tuning the tolerance angle α. The optimal resetting angle depends on the signal-to-noise ratio in the steering state, as shown in the experiment. The proposed method is simple and robust for targeting, despite variability in self-propelling speeds and angular velocities of individual particles.
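    The feedback rule described above can be mimicked in a toy two-dimensional simulation: the particle translates ("runs") while its heading lies within the tolerance angle α of the direction to the target, and otherwise rotates deterministically back toward the target ("steers"), with rotational diffusion acting throughout. All parameter values in the Python sketch below (speed, steering rate, noise strength, target position) are arbitrary illustrations, not the experimental values.

      # Toy sketch of the run-and-steer feedback: run while the heading is within
      # a tolerance angle alpha of the target direction, otherwise rotate back
      # toward the target. Parameter values are arbitrary illustrations.
      import math, random

      def simulate(alpha=0.5, v=1.0, omega=2.0, D_rot=0.2, dt=0.01, steps=20000,
                   target=(50.0, 0.0), seed=1):
          random.seed(seed)
          x, y, theta = 0.0, 0.0, 0.0
          for _ in range(steps):
              # Signed angle between the current heading and the direction to the target.
              to_target = math.atan2(target[1] - y, target[0] - x)
              mismatch = math.atan2(math.sin(to_target - theta), math.cos(to_target - theta))
              if abs(mismatch) <= alpha:
                  x += v * math.cos(theta) * dt    # run state: translate along the heading
                  y += v * math.sin(theta) * dt
              else:
                  theta += math.copysign(omega, mismatch) * dt   # steering state
              theta += math.sqrt(2 * D_rot * dt) * random.gauss(0.0, 1.0)  # rotational noise
              if math.hypot(target[0] - x, target[1] - y) < 1.0:
                  break
          return x, y

      if __name__ == "__main__":
          print("final position:", simulate())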

  11. First two operational years of the electron-beam ion trap charge breeder at the National Superconducting Cyclotron Laboratory

    NASA Astrophysics Data System (ADS)

    Lapierre, A.; Bollen, G.; Crisp, D.; Krause, S. W.; Linhardt, L. E.; Lund, K.; Nash, S.; Rencsok, R.; Ringle, R.; Schwarz, S.; Steiner, M.; Sumithrarachchi, C.; Summers, T.; Villari, A. C. C.; Williams, S. J.; Zhao, Q.

    2018-05-01

    The electron-beam ion trap (EBIT) charge breeder of the ReA post-accelerator, located at the National Superconducting Cyclotron Laboratory (Michigan State University), started on-line operation in September 2015. Since then, the EBIT has delivered many pilot beams of stable isotopes and several rare-isotope beams. An operating aspect of the ReA EBIT is the breeding of high charge states to reach high reaccelerated beam energies. Efficiencies in single charge states of more than 20% were measured with K39 15 + , Rb85 27 + , K47 17 + , and Ar34 15 + . Producing high charge states demands long breeding times. This reduces the ejection frequency and, hence, increases the number of ions ejected per pulse. Another operating aspect is the ability to spread the distribution in time of the ejected ion pulses to lower the instantaneous rate delivered to experiments. Pulse widths were stretched from a natural 25 μ s up to ˜70 ms . This publication reviews the progress of the ReA EBIT system over the years and presents the results of charge-breeding efficiency measurements and pulse-stretching tests obtained with stable- and rare-isotope beams. Studies performed with high sensitivity to identify and quantify stable-isotope contaminants from the EBIT are also presented, along with a novel method for purifying beams.

  12. First Results from CUORE: A Search for Lepton Number Violation via 0 ν β β Decay of Te 130

    NASA Astrophysics Data System (ADS)

    Alduino, C.; Alessandria, F.; Alfonso, K.; Andreotti, E.; Arnaboldi, C.; Avignone, F. T.; Azzolini, O.; Balata, M.; Bandac, I.; Banks, T. I.; Bari, G.; Barucci, M.; Beeman, J. W.; Bellini, F.; Benato, G.; Bersani, A.; Biare, D.; Biassoni, M.; Bragazzi, F.; Branca, A.; Brofferio, C.; Bryant, A.; Buccheri, A.; Bucci, C.; Bulfon, C.; Camacho, A.; Caminata, A.; Canonica, L.; Cao, X. G.; Capelli, S.; Capodiferro, M.; Cappelli, L.; Cardani, L.; Cariello, M.; Carniti, P.; Carrettoni, M.; Casali, N.; Cassina, L.; Cereseto, R.; Ceruti, G.; Chiarini, A.; Chiesa, D.; Chott, N.; Clemenza, M.; Conventi, D.; Copello, S.; Cosmelli, C.; Cremonesi, O.; Crescentini, C.; Creswick, R. J.; Cushman, J. S.; D'Addabbo, A.; D'Aguanno, D.; Dafinei, I.; Datskov, V.; Davis, C. J.; Del Corso, F.; Dell'Oro, S.; Deninno, M. M.; di Domizio, S.; di Vacri, M. L.; di Paolo, L.; Drobizhev, A.; Ejzak, L.; Faccini, R.; Fang, D. Q.; Faverzani, M.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Gaigher, R.; Giachero, A.; Gironi, L.; Giuliani, A.; Gladstone, L.; Goett, J.; Gorla, P.; Gotti, C.; Guandalini, C.; Guerzoni, M.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Hansen, E. V.; Heeger, K. M.; Hennings-Yeomans, R.; Hickerson, K. P.; Huang, H. Z.; Iannone, M.; Ioannucci, L.; Kadel, R.; Keppel, G.; Kogler, L.; Kolomensky, Yu. G.; Leder, A.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maiano, C.; Maino, M.; Marini, L.; Martinez, M.; Martinez Amaya, C.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Mosteiro, P. J.; Nagorny, S. S.; Napolitano, T.; Nastasi, M.; Nisi, S.; Nones, C.; Norman, E. B.; Novati, V.; Nucciotti, A.; Nutini, I.; O'Donnell, T.; Olcese, M.; Olivieri, E.; Orio, F.; Orlandi, D.; Ouellet, J. L.; Pagliarone, C. E.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pedretti, M.; Pedrotta, R.; Pelosi, A.; Pessina, G.; Pettinacci, V.; Piperno, G.; Pira, C.; Pirro, S.; Pozzi, S.; Previtali, E.; Reindl, F.; Rimondi, F.; Risegari, L.; Rosenfeld, C.; Rossi, C.; Rusconi, C.; Sakai, M.; Sala, E.; Salvioni, C.; Sangiorgio, S.; Santone, D.; Schaeffer, D.; Schmidt, B.; Schmidt, J.; Scielzo, N. D.; Singh, V.; Sisti, M.; Smith, A. R.; Stivanello, F.; Taffarello, L.; Tatananni, L.; Tenconi, M.; Terranova, F.; Tessaro, M.; Tomei, C.; Ventura, G.; Vignati, M.; Wagaarachchi, S. L.; Wallig, J.; Wang, B. S.; Wang, H. W.; Welliver, B.; Wilson, J.; Wilson, K.; Winslow, L. A.; Wise, T.; Zanotti, L.; Zarra, C.; Zhang, G. Q.; Zhu, B. X.; Zimmermann, S.; Zucchelli, S.; Cuore Collaboration

    2018-03-01

    The CUORE experiment, a ton-scale cryogenic bolometer array, recently began operation at the Laboratori Nazionali del Gran Sasso in Italy. The array represents a significant advancement in this technology, and in this work we apply it for the first time to a high-sensitivity search for a lepton-number-violating process: Te 130 neutrinoless double-beta decay. Examining a total TeO2 exposure of 86.3 kg yr, characterized by an effective energy resolution of (7.7 ± 0.5) keV FWHM and a background in the region of interest of (0.014 ± 0.002) counts/(keV kg yr), we find no evidence for neutrinoless double-beta decay. Including systematic uncertainties, we place a lower limit on the decay half-life of T1/2^0ν(Te 130) > 1.3 × 10^25 yr (90% C.L.); the median statistical sensitivity of this search is 7.0 × 10^24 yr. Combining this result with those of two earlier experiments, Cuoricino and CUORE-0, we find T1/2^0ν(Te 130) > 1.5 × 10^25 yr (90% C.L.), which is the most stringent limit to date on this decay. Interpreting this result as a limit on the effective Majorana neutrino mass, we find mββ < (110-520) meV, where the range reflects the nuclear matrix element estimates employed.

  13. Prevalence of marijuana use does not differentially increase among youth after states pass medical marijuana laws: Commentary on and reanalysis of US National Survey on Drug Use in Households data 2002-2011.

    PubMed

    Wall, Melanie M; Mauro, Christine; Hasin, Deborah S; Keyes, Katherine M; Cerda, Magdalena; Martins, Silvia S; Feng, Tianshu

    2016-03-01

    There is considerable interest in the effects of medical marijuana laws (MML) on marijuana use in the USA, particularly among youth. The article by Stolzenberg et al. (2015) "The effect of medical cannabis laws on juvenile cannabis use" concludes that "implementation of medical cannabis laws increase juvenile cannabis use". This result is opposite to the findings of other studies that analysed the same US National Survey on Drug Use in Households data as well as opposite to studies analysing other national data which show no increase or even a decrease in youth marijuana use after the passage of MML. We provide a replication of the Stolzenberg et al. results and demonstrate how the comparison they are making is actually driven by differences between states with and without MML rather than being driven by pre and post-MML changes within states. We show that Stolzenberg et al. do not properly control for the fact that states that pass MML during 2002-2011 tend to already have higher past-month marijuana use before passing the MML in the first place. We further show that when within-state changes are properly considered and pre-MML prevalence is properly controlled, there is no evidence of a differential increase in past-month marijuana use in youth that can be attributed to state MML. Copyright © 2016. Published by Elsevier B.V.

  14. Decay Rate of the Nuclear Isomer Th 229 (3 /2+,7.8 eV ) in a Dielectric Sphere, Thin Film, and Metal Cavity

    NASA Astrophysics Data System (ADS)

    Tkalya, E. V.

    2018-03-01

    The main decay channels of the anomalous low-energy 3/2+ (7.8 ± 0.5 eV) isomeric level of the Th 229 nucleus, namely γ emission and internal conversion, inside a dielectric sphere, dielectric thin film, and conducting spherical microcavity are investigated theoretically, taking into account the effect of media interfaces. It is shown that (1) the γ decay rate of the nuclear isomer inside a dielectric thin film or dielectric microsphere placed in a vacuum or in a metal cavity can decrease (or increase) by a factor of tens, (2) the γ activity of the distributed source as a function of time can be nonexponential, and (3) the metal cavity, whose size is of the order of the radiation wavelength, does not affect the probability of internal conversion in Th 229 , because the virtual photon attenuates at much shorter distances and the reflected wave is very weak.

  15. Search for Neutrinoless Double-β Decay in Ge 76 with the Majorana Demonstrator

    NASA Astrophysics Data System (ADS)

    Aalseth, C. E.; Abgrall, N.; Aguayo, E.; Alvis, S. I.; Amman, M.; Arnquist, I. J.; Avignone, F. T.; Back, H. O.; Barabash, A. S.; Barbeau, P. S.; Barton, C. J.; Barton, P. J.; Bertrand, F. E.; Bode, T.; Bos, B.; Boswell, M.; Bradley, A. W.; Brodzinski, R. L.; Brudanin, V.; Busch, M.; Buuck, M.; Caldwell, A. S.; Caldwell, T. S.; Chan, Y.-D.; Christofferson, C. D.; Chu, P.-H.; Collar, J. I.; Combs, D. C.; Cooper, R. J.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Dunmore, J. A.; Efremenko, Yu.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Fu, Z.; Fujikawa, B. K.; Fuller, E.; Galindo-Uribarri, A.; Gehman, V. M.; Gilliss, T.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guinn, I. S.; Guiseppe, V. E.; Hallin, A. L.; Haufe, C. R.; Hehn, L.; Henning, R.; Hoppe, E. W.; Hossbach, T. W.; Howe, M. A.; Jasinski, B. R.; Johnson, R. A.; Keeter, K. J.; Kephart, J. D.; Kidd, M. F.; Knecht, A.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Lesko, K. T.; Leviner, L. E.; Loach, J. C.; Lopez, A. M.; Luke, P. N.; MacMullin, J.; MacMullin, S.; Marino, M. G.; Martin, R. D.; Massarczyk, R.; McDonald, A. B.; Mei, D.-M.; Meijer, S. J.; Merriman, J. H.; Mertens, S.; Miley, H. S.; Miller, M. L.; Myslik, J.; Orrell, J. L.; O'Shaughnessy, C.; Othman, G.; Overman, N. R.; Perumpilly, G.; Pettus, W.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Reeves, J. H.; Reine, A. L.; Rielage, K.; Robertson, R. G. H.; Ronquest, M. C.; Ruof, N. W.; Schubert, A. G.; Shanks, B.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Steele, D.; Suriano, A. M.; Tedeschi, D.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Yaver, H.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.; Zhu, B. X.; Zimmermann, S.; Majorana Collaboration

    2018-03-01

    The Majorana Collaboration is operating an array of high purity Ge detectors to search for neutrinoless double-β decay in Ge 76 . The Majorana Demonstrator comprises 44.1 kg of Ge detectors (29.7 kg enriched in Ge 76 ) split between two modules contained in a low background shield at the Sanford Underground Research Facility in Lead, South Dakota. Here we present results from data taken during construction, commissioning, and the start of full operations. We achieve unprecedented energy resolution of 2.5 keV FWHM at Qββ and a very low background with no observed candidate events in 9.95 kg yr of enriched Ge exposure, resulting in a lower limit on the half-life of 1.9 × 10^25 yr (90% C.L.). This result constrains the effective Majorana neutrino mass to below 240-520 meV, depending on the matrix elements used. In our experimental configuration with the lowest background, the background is 4.0 (+3.1/−2.5) counts/(FWHM t yr).

  16. Ultralow energy calibration of LUX detector using Xe 127 electron capture

    NASA Astrophysics Data System (ADS)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.

    2017-12-01

    We report an absolute calibration of the ionization yields (Qy ) and fluctuations for electronic recoil events in liquid xenon at discrete energies between 186 eV and 33.2 keV. The average electric field applied across the liquid xenon target is 180 V /cm . The data are obtained using low energy Xe 127 electron capture decay events from the 95.0-day first run from LUX (WS2013) in search of weakly interacting massive particles. The sequence of gamma-ray and x-ray cascades associated with I 127 deexcitations produces clearly identified two-vertex events in the LUX detector. We observe the K-(binding energy, 33.2 keV), L-(5.2 keV), M-(1.1 keV), and N-(186 eV) shell cascade events and verify that the relative ratio of observed events for each shell agrees with calculations. The N-shell cascade analysis includes single extracted electron (SE) events and represents the lowest-energy electronic recoil in situ measurements that have been explored in liquid xenon.

  17. State-Level Medical Marijuana Laws, Marijuana Use and Perceived Availability of Marijuana Among the General U.S. Population

    PubMed Central

    Martins, Silvia S.; Mauro, Christine M.; Santaella-Tenorio, Julian; Kim, June H.; Cerda, Magdalena; Keyes, Katherine M.; Hasin, Deborah S.; Galea, Sandro; Wall, Melanie

    2016-01-01

    Background Little is known on how perceived availability of marijuana is associated with medical marijuana laws. We examined the relationship between medical marijuana laws (MML) and the prevalence of past-month marijuana use, with perceived availability of marijuana. Methods Data were from respondents included in the National Survey of Drug Use and Health restricted use data portal 2004–2013. Multilevel logistic regression of individual- level data was used to test differences between MML and non-MML states and changes in prevalence of past-month marijuana use and perceived availability from before to after passage of MML among adolescents, young adults and older adults controlling for demographics. Results Among adults 26+, past-month prevalence of marijuana use increased from 5.87% to 7.15% after MML passage (Adjusted Odds Ratio (AOR): 1.24 [1.16–1.31]), but no change in prevalence of use was found for 12–17 or 18–25 year-olds. Perceived availability of marijuana increased after MML were enacted among those 26+ but not in younger groups. Among all age groups, prevalence of marijuana use and perception of it being easily available was higher in states that would eventually pass MML by 2013 compared to those that had not. Perceived availability was significantly associated with increased risk of past-month marijuana use in all age groups. Conclusion Evidence suggests perceived availability as a driver of change in use of marijuana due to MML. To date, this has only occurred in adults 26+ and different scenarios that could explain this change need to be further explored. PMID:27755989

  18. Medical marijuana laws and adolescent use of marijuana and other substances: Alcohol, cigarettes, prescription drugs, and other illicit drugs.

    PubMed

    Cerdá, Magdalena; Sarvet, Aaron L; Wall, Melanie; Feng, Tianshu; Keyes, Katherine M; Galea, Sandro; Hasin, Deborah S

    2018-02-01

    Historical shifts have taken place in the last twenty years in marijuana policy. The impact of medical marijuana laws (MML) on use of substances other than marijuana is not well understood. We examined the relationship between state MML and use of marijuana, cigarettes, illicit drugs, nonmedical use of prescription opioids, amphetamines, and tranquilizers, as well as binge drinking. Pre-post MML difference-in-difference analyses were performed on a nationally representative sample of adolescents in 48 contiguous U.S. states. Participants were 1,179,372 U.S. 8th, 10th, and 12th graders in the national Monitoring the Future annual surveys conducted in 1991-2015. Measurements were any self-reported past-30-day use of marijuana, cigarettes, non-medical use of opioids, amphetamines and tranquilizers, other illicit substances, and any past-two-week binge drinking (5+ drinks per occasion). Among 8th graders, the prevalence of marijuana, binge drinking, cigarette use, non-medical use of opioids, amphetamines and tranquilizers, and any non-marijuana illicit drug use decreased after MML enactment (0.2-2.4% decrease; p-values: <0.0001-0.0293). Among 10th graders, the prevalence of substance use did not change after MML enactment (p-values: 0.177-0.938). Among 12th graders, non-medical prescription opioid and cigarette use increased after MML enactment (0.9-2.7% increase; p-values: <0.0001-0.0026). MML enactment is associated with decreases in use of marijuana and other drugs in early adolescence in those states. Mechanisms that explain the increase in non-medical prescription opioid and cigarette use among 12th graders following MML enactment deserve further study. Copyright © 2017 Elsevier B.V. All rights reserved.
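    The pre-post difference-in-difference comparison used above reduces, in its simplest two-period form, to subtracting the change observed in non-MML states from the change observed in MML states. The Python sketch below shows this arithmetic with hypothetical prevalences; the published analyses additionally adjust for covariates and survey design, which this schematic omits.

      # Schematic two-period difference-in-difference estimate: the (post - pre)
      # change in MML states minus the contemporaneous change in non-MML states.
      # All prevalences below are hypothetical, not survey values.
      def diff_in_diff(pre_mml, post_mml, pre_ctrl, post_ctrl):
          """Return the difference-in-difference estimate in percentage points."""
          return (post_mml - pre_mml) - (post_ctrl - pre_ctrl)

      if __name__ == "__main__":
          print(diff_in_diff(pre_mml=16.0, post_mml=15.5, pre_ctrl=14.0, post_ctrl=13.8))
          # -> -0.3 percentage points (a decline relative to the comparison states)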

  19. State-level medical marijuana laws, marijuana use and perceived availability of marijuana among the general U.S. population.

    PubMed

    Martins, Silvia S; Mauro, Christine M; Santaella-Tenorio, Julian; Kim, June H; Cerda, Magdalena; Keyes, Katherine M; Hasin, Deborah S; Galea, Sandro; Wall, Melanie

    2016-12-01

    Little is known about how perceived availability of marijuana is associated with medical marijuana laws. We examined the relationship between medical marijuana laws (MML), the prevalence of past-month marijuana use, and perceived availability of marijuana. Data were from respondents included in the National Survey on Drug Use and Health restricted-use data portal, 2004-2013. Multilevel logistic regression of individual-level data was used to test differences between MML and non-MML states and changes in the prevalence of past-month marijuana use and perceived availability from before to after passage of MML among adolescents, young adults and older adults, controlling for demographics. Among adults 26+, past-month prevalence of marijuana use increased from 5.87% to 7.15% after MML passage (Adjusted Odds Ratio (AOR): 1.24 [1.16-1.31]), but no change in prevalence of use was found for 12-17 or 18-25 year-olds. Perceived availability of marijuana increased after MML were enacted among those 26+ but not in younger groups. Among all age groups, the prevalence of marijuana use and the perception of it being easily available were higher in states that would eventually pass MML by 2013 compared with those that had not. Perceived availability was significantly associated with increased risk of past-month marijuana use in all age groups. Evidence suggests that perceived availability is a driver of the change in marijuana use due to MML. To date, this change has only occurred in adults 26+, and the different scenarios that could explain it need to be further explored. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Optimizing EDELWEISS detectors for low-mass WIMP searches

    NASA Astrophysics Data System (ADS)

    Arnaud, Q.; Armengaud, E.; Augier, C.; Benoît, A.; Bergé, L.; Billard, J.; Broniatowski, A.; Camus, P.; Cazes, A.; Chapellier, M.; Charlieux, F.; de Jésus, M.; Dumoulin, L.; Eitel, K.; Foerster, N.; Gascon, J.; Giuliani, A.; Gros, M.; Hehn, L.; Jin, Y.; Juillard, A.; Kleifges, M.; Kozlov, V.; Kraus, H.; Kudryavtsev, V. A.; Le-Sueur, H.; Maisonobe, R.; Marnieros, S.; Navick, X.-F.; Nones, C.; Olivieri, E.; Pari, P.; Paul, B.; Poda, D.; Queguiner, E.; Rozov, S.; Sanglard, V.; Scorza, S.; Siebenborn, B.; Vagneron, L.; Weber, M.; Yakushev, E.; EDELWEISS Collaboration

    2018-01-01

    The physics potential of EDELWEISS detectors for the search of low-mass weakly interacting massive particles (WIMPs) is studied. Using a data-driven background model, projected exclusion limits are computed using frequentist and multivariate analysis approaches, namely, profile likelihood and boosted decision tree. Both current and achievable experimental performances are considered. The optimal strategy for detector optimization depends critically on whether the emphasis is put on WIMP masses below or above ~5 GeV/c². The projected sensitivity for the next phase of the EDELWEISS-III experiment at the Modane Underground Laboratory (LSM) for low-mass WIMP search is presented. By 2018 an upper limit on the spin-independent WIMP-nucleon cross section of σSI = 7×10⁻⁴² cm² is expected for a WIMP mass in the range 2-5 GeV/c². The requirements for a future hundred-kilogram-scale experiment designed to reach the bounds imposed by the coherent scattering of solar neutrinos are also described. By improving the ionization resolution down to 50 eVee, we show that such an experiment installed in an even lower background environment (e.g., at SNOLAB), together with an exposure of 1,000 kg·yr, should allow us to observe about 80 8B neutrino events after discrimination.

  1. Search for Neutrinoless Quadruple-β Decay of 150Nd with the NEMO-3 Detector

    NASA Astrophysics Data System (ADS)

    Arnold, R.; Augier, C.; Barabash, A. S.; Basharina-Freshville, A.; Blondel, S.; Blot, S.; Bongrand, M.; Boursette, D.; Brudanin, V.; Busto, J.; Caffrey, A. J.; Calvez, S.; Cascella, M.; Cerna, C.; Cesar, J. P.; Chapon, A.; Chauveau, E.; Chopra, A.; Dawson, L.; Duchesneau, D.; Durand, D.; Egorov, V.; Eurin, G.; Evans, J. J.; Fajt, L.; Filosofov, D.; Flack, R.; Garrido, X.; Gómez, H.; Guillon, B.; Guzowski, P.; Hodák, R.; Huber, A.; Hubert, P.; Hugon, C.; Jullian, S.; Klimenko, A.; Kochetov, O.; Konovalov, S. I.; Kovalenko, V.; Lalanne, D.; Lang, K.; Lemière, Y.; Le Noblet, T.; Liptak, Z.; Liu, X. R.; Loaiza, P.; Lutter, G.; Macko, M.; Macolino, C.; Mamedov, F.; Marquet, C.; Mauger, F.; Morgan, B.; Mott, J.; Nemchenok, I.; Nomachi, M.; Nova, F.; Nowacki, F.; Ohsumi, H.; Patrick, C.; Pahlka, R. B.; Perrot, F.; Piquemal, F.; Povinec, P.; Přidal, P.; Ramachers, Y. A.; Remoto, A.; Reyss, J. L.; Riddle, C. L.; Rukhadze, E.; Saakyan, R.; Salazar, R.; Sarazin, X.; Shitov, Yu.; Simard, L.; Šimkovic, F.; Smetana, A.; Smolek, K.; Smolnikov, A.; Söldner-Rembold, S.; Soulé, B.; Štefánik, D.; Štekl, I.; Suhonen, J.; Sutton, C. S.; Szklarz, G.; Thomas, J.; Timkin, V.; Torre, S.; Tretyak, Vl. I.; Tretyak, V. I.; Umatov, V. I.; Vanushin, I.; Vilela, C.; Vorobel, V.; Waters, D.; Xie, F.; Žukauskas, A.; NEMO-3 Collaboration

    2017-07-01

    We report the results of a first experimental search for lepton number violation by four units in the neutrinoless quadruple-β decay of 150Nd, using a total exposure of 0.19 kg·yr recorded with the NEMO-3 detector at the Modane Underground Laboratory. We find no evidence of this decay and set lower limits on the half-life in the range T1/2 > (1.1-3.2)×10²¹ yr at the 90% C.L., depending on the model used for the kinematic distributions of the emitted electrons.

  2. Small interfering RNAs from bidirectional transcripts of GhMML3_A12 regulate cotton fiber development.

    PubMed

    Wan, Qun; Guan, Xueying; Yang, Nannan; Wu, Huaitong; Pan, Mengqiao; Liu, Bingliang; Fang, Lei; Yang, Shouping; Hu, Yan; Ye, Wenxue; Zhang, Hua; Ma, Peiyong; Chen, Jiedan; Wang, Qiong; Mei, Gaofu; Cai, Caiping; Yang, Donglei; Wang, Jiawei; Guo, Wangzhen; Zhang, Wenhua; Chen, Xiaoya; Zhang, Tianzhen

    2016-06-01

    Natural antisense transcripts (NATs) are commonly observed in eukaryotic genomes, but only a limited number of such genes have been identified as being involved in gene regulation in plants. In this research, we investigated the function of a small RNA derived from a NAT in fiber cell development. Using a map-based cloning strategy for the first time in tetraploid cotton, we cloned a naked seed mutant gene (N1) encoding a MYB-MIXTA-like transcription factor 3 (MML3)/GhMYB25-like on chromosome A12, GhMML3_A12, that is associated with fuzz fiber development. The extremely low expression of GhMML3_A12 in N1 is associated with NAT production, driven by its 3' antisense promoter, as indicated by a promoter-driven histochemical staining assay. In addition, small RNA deep sequencing analysis suggested that the bidirectional transcripts of GhMML3_A12 form double-stranded RNAs and generate 21-22 nt small RNAs. Therefore, in a fiber-specific manner, small RNAs derived from the GhMML3_A12 locus can mediate GhMML3_A12 mRNA self-cleavage and result in the production of naked seeds followed by lint fiber inhibition in N1 plants. The present research reports the first observation of NATs and NAT-derived siRNAs directly controlling fiber development in cotton. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  3. A comparison of 2 cesarean section methods, modified Misgav-Ladach and Pfannenstiel-Kerr: A randomized controlled study.

    PubMed

    Şahin, Nur; Genc, Mine; Turan, Gülüzar Arzu; Kasap, Esin; Güçlü, Serkan

    2018-03-13

    The modified Misgav-Ladach (MML) method is a minimally invasive cesarean section procedure compared with the classic Pfannenstiel-Kerr (PK) method. The aim of the study was to compare the MML and PK methods in terms of intraoperative and short-term postoperative outcomes. This prospective, randomized controlled trial involved 252 pregnant women scheduled for primary emergency or elective cesarean section between October 2014 and July 2015. The primary outcome measures were the duration of surgery, extraction time, Apgar score, blood loss, wound complications, and number of sutures used. Secondary outcome measures were wound infection, time to bowel restitution, visual analogue scale (VAS) scores at 6 h and 24 h after the operation, limitations in movement, and analgesic requirements. At 6 weeks after surgery, the patients were evaluated for late complications. There was a significant reduction in total operating and extraction time in the MML group (p < 0.001). Limitations in movement were lower at 24 h after the MML operation, and less analgesic was required in the MML group. There was no difference between the 2 groups in terms of febrile morbidity or the duration of hospitalization. At 6 weeks after the operation, no complaints and no additional complications from the surgery were noted. The MML method is a minimally invasive cesarean section technique. In the future, as surgeons' experience increases, MML will likely be chosen more often than the classic PK method.

  4. How does state marijuana policy affect U.S. youth? Medical marijuana laws, marijuana use and perceived harmfulness: 1991–2014

    PubMed Central

    Keyes, Katherine M.; Wall, Melanie; Cerdá, Magdalena; Schulenberg, John; O’Malley, Patrick M.; Galea, Sandro; Feng, Tianshu; Hasin, Deborah S.

    2016-01-01

    Aims To test, among US students: 1) whether perceived harmfulness of marijuana has changed over time, 2) whether perceived harmfulness of marijuana changed post-passage of state medical marijuana laws (MML) compared with pre-passage; 3) whether perceived harmfulness of marijuana mediates and/or modifies the relation between MML and marijuana use as a function of grade level. Design Cross-sectional nationally-representative surveys of U.S. students, conducted annually, 1991–2014, in the Monitoring The Future study. Setting Surveys conducted in schools in all coterminous states; 21 states passed MML between 1996–2014. Participants The sample included 1,134,734 adolescents in 8th, 10th, and 12th grades. Measures State passage of MML; perceived harmfulness of marijuana use (perceiving great or moderate risk to health from smoking marijuana occasionally versus slight or no risk); and marijuana use (prior 30 days). Data were analyzed using time-varying multi-level regression modeling. Findings Perceived harmfulness of marijuana has significantly decreased since 1991 (from an estimated 84.0% in 1991 to 53.8% in 2014, p<0.01). Across time, perceived harmfulness was lower in states that passed MML (OR=0.86, 95% C.I. 0.75–0.97). In states with MML, perceived harmfulness of marijuana increased among 8th graders after MML passage (OR=1.21, 95% C.I. 1.08–1.36), while marijuana use decreased (OR=0.81, 95% C.I. 0.72–0.92). Results were null for other grades, and for all grades combined. Increases in perceived harmfulness among 8th graders after MML passage were associated with ~33% of the decrease in use. When adolescents were stratified by perceived harmfulness, use in 8th graders decreased to a greater extent among those who perceived marijuana as harmful. Conclusions While perceived harmfulness of marijuana use is decreasing nationally among adolescents, passage of medical marijuana laws is associated with increases in perceived harmfulness among young adolescents, and marijuana use decreased among those who perceived marijuana to be harmful after passage of MML. PMID:27393902

  5. Precision Measurement of the β Asymmetry in Spin-Polarized 37K Decay

    NASA Astrophysics Data System (ADS)

    Fenker, B.; Gorelov, A.; Melconian, D.; Behr, J. A.; Anholm, M.; Ashery, D.; Behling, R. S.; Cohen, I.; Craiciu, I.; Gwinner, G.; McNeil, J.; Mehlman, M.; Olchanski, K.; Shidling, P. D.; Smale, S.; Warner, C. L.

    2018-02-01

    Using TRIUMF's neutral atom trap, TRINAT, for nuclear β decay, we have measured the β asymmetry with respect to the initial nuclear spin in 37K to be Aβ = -0.5707(13)syst(13)stat(5)pol, a 0.3% measurement. This is the best relative accuracy of any β-asymmetry measurement in a nucleus or the neutron, and it is in agreement with the standard model prediction of -0.5706(7). We compare constraints on physics beyond the standard model with other β-decay measurements, and improve the value of Vud measured in this mirror nucleus by a factor of 4.

  6. Anomalous Nernst and Hall effects in magnetized platinum and palladium

    NASA Astrophysics Data System (ADS)

    Guo, G. Y.; Niu, Q.; Nagaosa, N.

    2014-06-01

    We study the anomalous Nernst effect (ANE) and anomalous Hall effect (AHE) in proximity-induced ferromagnetic palladium and platinum, which are widely used in spintronics, within the Berry phase formalism based on relativistic band-structure calculations. We find that both the anomalous Hall (σ^A_xy) and Nernst (α^A_xy) conductivities can be related to the spin Hall conductivity (σ^S_xy) and the band exchange splitting (Δex) by the relations σ^A_xy = (Δex e/ℏ)[σ^S_xy(EF)]' and α^A_xy = -(π²/3)(kB² T Δex/ℏ)[σ^S_xy(μ)]'', respectively, where the primes denote derivatives with respect to energy. In particular, these relations predict that σ^A_xy in magnetized Pt (Pd) would be positive (negative), since [σ^S_xy(EF)]' is positive (negative). Furthermore, both σ^A_xy and α^A_xy are approximately proportional to the induced spin magnetic moment (ms) because Δex is a linear function of ms. Using the reported ms in magnetized Pt and Pd, we predict that the intrinsic anomalous Nernst conductivity (ANC) in magnetic platinum and palladium would be gigantic, up to ten times larger than in, e.g., iron, while the intrinsic anomalous Hall conductivity (AHC) would also be significant.
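    The second relation follows from the first via the standard Mott formula (a textbook low-temperature relation, not specific to this paper), with primes again denoting energy derivatives:

    \[
    \alpha_{xy} \simeq -\frac{\pi^{2}}{3}\,\frac{k_{B}^{2}T}{e}\,
    \left.\frac{\partial \sigma_{xy}}{\partial \mu}\right|_{T=0},
    \qquad
    \sigma^{A}_{xy}(\mu)=\frac{\Delta_{\mathrm{ex}}\,e}{\hbar}\,\big[\sigma^{S}_{xy}(\mu)\big]'
    \;\Longrightarrow\;
    \alpha^{A}_{xy}=-\frac{\pi^{2}}{3}\,\frac{k_{B}^{2}T\,\Delta_{\mathrm{ex}}}{\hbar}\,\big[\sigma^{S}_{xy}(\mu)\big]''.
    \]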

  7. Environmental Management Model for Road Maintenance Operation Involving Community Participation

    NASA Astrophysics Data System (ADS)

    Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.

    2017-07-01

    Public expectations in Central Java for the fulfillment of demand, especially for road infrastructure, are very high, as reflected in the number of complaints and expectations the community submits via Twitter, Short Message Service (SMS), e-mail and public reports in various media. The Highways Department of Central Java Province therefore requires a model of environmental management for the implementation of routine road maintenance that involves the community, so that roads remain representative and can serve road users safely and comfortably. This study used a survey method with SEM and SWOT analyses, with the latent independent variables (X): community participation in road regulation, development, construction and supervision (PSM); community behavior in road utilization (PMJ); provincial road service (PJP); safety on the provincial road (KJP); and integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of the provincial road integrated with an environmental management system involving community participation (MML). The results showed that routine road maintenance in Central Java Province has not yet implemented environmental management involving the community; therefore, an environmental management model was developed, with the following results: H1: Community Participation (PSM) has a positive influence on the Environmental Management Model (MML); H2: Community Behavior in Road Utilization (PMJ) has a positive effect on MML; H3: Provincial Road Service (PJP) has a positive effect on MML; H4: Safety on the Provincial Road (KJP) has a positive effect on MML; H5: Integrated Management System (SMT) has a positive influence on MML. From the analysis, a model describing the influence of the independent variables PSM, PMJ, PJP, KJP and SMT on the dependent variable MML was obtained: MML = 0.13 PSM + 0.07 PMJ + 0.09 PJP + 0.19 KJP + 0.48 SMT + e.

  8. Compensating effect of minor portal hypertension on the muscle mass loss-related poor prognosis in cirrhosis.

    PubMed

    Maruyama, Hitoshi; Kobayashi, Kazufumi; Kiyono, Soichiro; Ogasawara, Sadahisa; Suzuki, Eichiro; Ooka, Yoshihiko; Chiba, Tetsuhiro; Yamaguchi, Tadashi

    2017-01-01

    Background: To examine the influence of the severity of portal hemodynamic abnormality on the prognosis of cirrhosis with respect to muscle mass loss (MML). Methods: The study involved a subgroup analysis in 98 cirrhosis patients (63.5 ± 11.8 years) who prospectively underwent both Doppler ultrasound and hepatic venous catheterization. The prognostic influence of MML diagnosed by computed tomography using the L3 skeletal muscle index was evaluated (median observation period, 32.7 months). Results: The cumulative survival rate differed between patients with MML (n = 34; 82.2%/1 year, 41.2%/3 years and 36.1%/5 years) and those without (n = 64; 92.1%/1 year, 74.9%/3 years and 69.4%/5 years; p = 0.005). When divided with respect to the portal velocity, the survival rate differed between patients with and without MML both in the cohort < 12.8 cm/s (n = 52, p = 0.009) and in the cohort ≥ 12.8 cm/s (n = 44, p = 0.041). The survival rate also differed between patients with MML (n = 24; 78.8%/1 year, 40.6%/3 years and 34.8%/5 years) and those without (n = 45; 91.1%/1 year, 71.3%/3 years and 63.1%/5 years; p = 0.008) in the cohort with hepatic venous pressure gradient (HVPG) > 12 mmHg. However, in the cohort with HVPG ≤ 12 mmHg, the survival rate showed no difference between patients with MML (n = 10; 100%/1 year, 61.9%/3 years and 61.9%/5 years) and those without (n = 19; 93.8%/1 year, 71.2%/3 years and 59.4%/5 years; p = 0.493). Conclusion: Lower HVPG has a compensating effect on the MML-induced poor prognosis of cirrhosis. Care should be taken to consider the severity of portal hypertension when evaluating the influence of MML.

  9. Explicit dosimetry for 2-(1-hexyloxyethyl)-2-devinyl pyropheophorbide-a-mediated photodynamic therapy: macroscopic singlet oxygen modeling

    NASA Astrophysics Data System (ADS)

    Penjweini, Rozhin; Liu, Baochang; Kim, Michele M.; Zhu, Timothy C.

    2015-12-01

    Type II photodynamic therapy (PDT) is based on photochemical reactions mediated through an interaction between a photosensitizer, ground-state oxygen ([3O2]), and light excitation at an appropriate wavelength, which results in the production of reactive singlet oxygen ([1O2]rx). We use an empirical macroscopic model based on four photochemical parameters for the calculation of the [1O2]rx threshold concentration ([1O2]rx,sh) causing tissue necrosis in tumors after PDT. For this reason, 2-(1-hexyloxyethyl)-2-devinyl pyropheophorbide-a (HPPH)-mediated PDT was performed interstitially on mice with radiation-induced fibrosarcoma (RIF) tumors. A linear light source at 665 nm with total energy released per unit length of 12 to 100 J/cm and source power per unit length (LS) of 12 to 150 mW/cm was used to induce different radii of necrosis. Then the amount of [1O2]rx calculated by the macroscopic model, incorporating explicit PDT dosimetry of light fluence distribution, tissue optical properties, and HPPH concentration, was correlated to the necrotic radius to obtain the model parameters and [1O2]rx,sh. We provide evidence that [1O2]rx is a better dosimetric quantity for predicting the treatment outcome than PDT dose, which is proportional to the time integral of the products of the photosensitizer concentration and light fluence rate.

  10. Mondo complexes regulate TFEB via TOR inhibition to promote longevity in response to gonadal signals

    PubMed Central

    Nakamura, Shuhei; Karalay, Özlem; Jäger, Philipp S.; Horikawa, Makoto; Klein, Corinna; Nakamura, Kayo; Latza, Christian; Templer, Sven E.; Dieterich, Christoph; Antebi, Adam

    2016-01-01

    Germline removal provokes longevity in several species and shifts resources towards survival and repair. Several Caenorhabditis elegans transcription factors regulate longevity arising from germline removal; yet, how they work together is unknown. Here we identify a Myc-like HLH transcription factor network comprised of Mondo/Max-like complex (MML-1/MXL-2) to be required for longevity induced by germline removal, as well as by reduced TOR, insulin/IGF signalling and mitochondrial function. Germline removal increases MML-1 nuclear accumulation and activity. Surprisingly, MML-1 regulates nuclear localization and activity of HLH-30/TFEB, a convergent regulator of autophagy, lysosome biogenesis and longevity, by downregulating TOR signalling via LARS-1/leucyl-transfer RNA synthase. HLH-30 also upregulates MML-1 upon germline removal. Mammalian MondoA/B and TFEB show similar mutual regulation. MML-1/MXL-2 and HLH-30 transcriptomes show both shared and preferential outputs including MDL-1/MAD-like HLH factor required for longevity. These studies reveal how an extensive interdependent HLH transcription factor network distributes responsibility and mutually enforces states geared towards reproduction or survival. PMID:27001890

  11. Evidence for Spin Singlet Pairing with Strong Uniaxial Anisotropy in URu2Si2 Using Nuclear Magnetic Resonance

    NASA Astrophysics Data System (ADS)

    Hattori, T.; Sakai, H.; Tokunaga, Y.; Kambe, S.; Matsuda, T. D.; Haga, Y.

    2018-01-01

    In order to identify the spin contribution to superconducting pairing compatible with the so-called "hidden order", 29Si nuclear magnetic resonance measurements have been performed using a high-quality single crystal of URu2Si2. A clear reduction of the 29Si Knight shift in the superconducting state has been observed under a magnetic field applied along the crystalline c axis, corresponding to the magnetic easy axis. These results provide direct evidence for the formation of spin-singlet Cooper pairs. Consequently, results indicating a very tiny change of the in-plane Knight shift reported previously demonstrate extreme uniaxial anisotropy of the spin susceptibility in the hidden order state.

  12. Applying physiological principles and assessment techniques to swimming the English Channel. A case study.

    PubMed

    Acevedo, E O; Meyers, M C; Hayman, M; Haskin, J

    1997-03-01

    This study presents the use of physiological principles and assessment techniques in addressing four objectives that can enhance a swimmer's likelihood of successfully swimming the English Channel. The four objectives were: (1) to prescribe training intensities and determine ideal swimming pace; (2) to determine the amount of insulation needed, relative to heat produced, to diminish the likelihood of the swimmer suffering from hypothermia; (3) to calculate the caloric expenditure for the swim and the necessary glucose replacement required to prevent glycogen depletion; and (4) to determine the rate of acclimatization to cold water (15.56°C/60°F). The subject participated in several pool swimming data collection sessions, including a tethered swim incremental protocol to determine peak oxygen consumption and onset of lactate accumulation, and several steady-state swims to determine ideal swimming pace at 4.0 mmol/L of lactate. Additionally, these swims provided information on oxygen consumption, which in combination with ultrasound assessment of subcutaneous fat was used to assess heat production and insulation capabilities. Finally, the subject participated in 18 cold water immersions to document the acclimatization rate. The data demonstrated the high fitness level of this subject and indicated that at a stroke rate of 63 strokes/min, HR was 130 beats/min and lactate was 4 mmol/L. At this swimming pace the swimmer would need to consume 470 kcal of glucose/hr. In addition, the energy produced at this swim pace was 13.25 kcal/min while the energy lost at the present subcutaneous fat quantity was 13.40 kcal/min, requiring a fat weight gain of 6,363.03 g (13.88 lbs) to resist heat loss. Finally, the data from the cold water immersions suggested that acclimatization occurred following two weeks of immersions. These results were provided to the swimmer and utilized in making decisions in preparation for the swim.

  13. A comparative study of the treatment of ethylene plant spent caustic by neutralization and classical and advanced oxidation.

    PubMed

    Hawari, Alaa; Ramadan, Hasanat; Abu-Reesh, Ibrahim; Ouederni, Mabrouk

    2015-03-15

    The treatment of spent caustic produced from an ethylene plant was investigated. In the case of neutralization alone it was found that the maximum removal of sulfide occurred at pH values below 5.5. The highest percentage removal of sulfide (99% at pH = 1.5) was accompanied by the highest COD removal (88%). For classical oxidation using H2O2 the maximum COD removal reached 89% at pH = 2.5 and at a hydrogen peroxide concentration of 19 mmol/L. For advanced oxidation using Fenton's process it was found that the maximum COD removal of 96.5% was achieved at a hydrogen peroxide/ferrous sulfate ratio of 7:1. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Determination of the direct double-β-decay Q value of 96Zr and atomic masses of 90-92,94,96Zr and 92,94-98,100Mo

    NASA Astrophysics Data System (ADS)

    Gulyuz, K.; Ariche, J.; Bollen, G.; Bustabad, S.; Eibach, M.; Izzo, C.; Novario, S. J.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Valverde, A. A.

    2015-05-01

    Experimental searches for neutrinoless double-β decay offer one of the best opportunities to look for physics beyond the standard model. Detecting this decay would confirm the Majorana nature of the neutrino, and a measurement of its half-life can be used to determine the absolute neutrino mass scale. Important to both tasks is an accurate knowledge of the Q value of the double-β decay. The LEBIT Penning trap mass spectrometer was used for the first direct experimental determination of the 96Zr double-β-decay Q value: Qββ = 3355.85(15) keV. This value is nearly 7 keV larger than the 2012 Atomic Mass Evaluation [M. Wang et al., Chin. Phys. C 36, 1603 (2012), 10.1088/1674-1137/36/12/003] value and one order of magnitude more precise. The 3σ shift is primarily due to a more accurate measurement of the 96Zr atomic mass: m(96Zr) = 95.908 277 35(17) u. Using the new Q value, the 2νββ-decay matrix element, |M2ν|, is calculated. Improved determinations of the atomic masses of all other zirconium (90-92,94,96Zr) and molybdenum (92,94-98,100Mo) isotopes, using both 12C8 and 87Rb as references, are also reported.
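    For context, the standard Penning-trap relations behind such a determination (textbook formulas, not taken from this paper) are: the double-β-decay Q value is the parent-daughter atomic mass difference, and the masses themselves follow from measured cyclotron-frequency ratios taken in the same magnetic field,

    \[
    Q_{\beta\beta} = \big[m(^{96}\mathrm{Zr}) - m(^{96}\mathrm{Mo})\big]\,c^{2},
    \qquad
    \nu_{c} = \frac{qB}{2\pi m}
    \;\Longrightarrow\;
    \frac{m}{m_{\mathrm{ref}}} = \frac{\nu_{c,\mathrm{ref}}}{\nu_{c}}
    \quad\text{(equal charge states).}
    \]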

  15. Fasting plasma glucose cutoff value for the prediction of future diabetes development: a study of middle-aged Koreans in a health promotion center.

    PubMed

    Kim, Dong-Jun; Cho, Nam-Han; Noh, Jung-Hyun; Kim, Hyun-Jin; Choi, Yoon-Ho; Jung, Jae-Hoon; Min, Yong-Ki; Lee, Myung-Shik; Lee, Moon-Kyu; Kim, Kwang-Won

    2005-08-01

    We determined optimal fasting plasma glucose (FPG) cutoff values predictive of future diabetes development in a group of middle-aged Koreans who visited a health promotion center. The medical records of 2,964 subjects, who attended the Health Promotion Center in 1998 and 2003, were examined. Subjects were classified into four groups according to their baseline FPG values (Group 1: FPG < 5.0 mmol/L; Group 2: 5.0 ≤ FPG < 5.6 mmol/L; Group 3: 5.6 ≤ FPG < 6.1 mmol/L; Group 4: 6.1 ≤ FPG < 7.0 mmol/L). No significant difference was observed between Group 1 and Group 2 in terms of diabetes incidence. However, incidence in Group 3 was significantly higher than that in Group 1 [hazard ratio 4.88 (1.65-14.41), p = 0.004], and the hazard ratio for diabetes in Group 4 was 36.91 (13.11-103.61), p < 0.001, versus Group 1. Receiver operating characteristic curve analysis showed that an FPG of 5.97 mmol/L represents the cutoff giving the best combination of sensitivity and specificity. Our data show that the risk of future diabetes development starts to increase below an FPG of 6.1 mmol/L and suggest the importance of efforts to modify diabetes risk factors at lower impaired fasting glucose levels.

  16. Probing Sizes and Shapes of Nobelium Isotopes by Laser Spectroscopy

    NASA Astrophysics Data System (ADS)

    Raeder, S.; Ackermann, D.; Backe, H.; Beerwerth, R.; Berengut, J. C.; Block, M.; Borschevsky, A.; Cheal, B.; Chhetri, P.; Düllmann, Ch. E.; Dzuba, V. A.; Eliav, E.; Even, J.; Ferrer, R.; Flambaum, V. V.; Fritzsche, S.; Giacoppo, F.; Götz, S.; Heßberger, F. P.; Huyse, M.; Kaldor, U.; Kaleja, O.; Khuyagbaatar, J.; Kunz, P.; Laatiaoui, M.; Lautenschläger, F.; Lauth, W.; Mistry, A. K.; Minaya Ramirez, E.; Nazarewicz, W.; Porsev, S. G.; Safronova, M. S.; Safronova, U. I.; Schuetrumpf, B.; Van Duppen, P.; Walther, T.; Wraith, C.; Yakushev, A.

    2018-06-01

    Until recently, ground-state nuclear moments of the heaviest nuclei could only be inferred from nuclear spectroscopy, where model assumptions are required. Laser spectroscopy in combination with modern atomic structure calculations is now able to probe these moments directly, in a comprehensive and nuclear-model-independent way. Here we report on unique access to the differential mean-square charge radii of 252,253,254No, and therefore to changes in nuclear size and shape. State-of-the-art nuclear density functional calculations describe well the changes in nuclear charge radii in the region of the heavy actinides, indicating an appreciable central depression in the deformed proton density distribution in the 252,254No isotopes. Finally, the hyperfine splitting of 253No was evaluated, enabling a complementary measure of its (quadrupole) deformation, as well as an insight into the neutron single-particle wave function via the nuclear spin and magnetic moment.

  17. Analysis of the Daya Bay Reactor Antineutrino Flux Changes with Fuel Burnup

    NASA Astrophysics Data System (ADS)

    Hayes, A. C.; Jungman, Gerard; McCutchan, E. A.; Sonzogni, A. A.; Garvey, G. T.; Wang, X. B.

    2018-01-01

    We investigate the recent Daya Bay results on the changes in the antineutrino flux and spectrum with the burnup of the reactor fuel. We find that the discrepancy between current model predictions and the Daya Bay results can be traced to the originally measured 235U/239Pu ratio of the fission β spectra that were used as a base for the expected antineutrino fluxes. An analysis of the antineutrino spectra that is based on a summation over all fission fragment β decays, using nuclear database input, explains all of the features seen in the Daya Bay evolution data. However, this summation method still allows for an anomaly. We conclude that there is currently not enough information to use the antineutrino flux changes to rule out the possible existence of sterile neutrinos.

  18. Dissipation-Induced Anomalous Multicritical Phenomena

    NASA Astrophysics Data System (ADS)

    Soriente, M.; Donner, T.; Chitra, R.; Zilberberg, O.

    2018-05-01

    We explore the influence of dissipation on a paradigmatic driven-dissipative model in which a collection of two-level atoms interacts with both quadratures of a quantum cavity mode. The closed system exhibits multiple phase transitions involving discrete and continuous symmetry breaking, and all phases culminate in a multicritical point. In the open system, we show that infinitesimal dissipation erases the phase with broken continuous symmetry and radically alters the model's phase diagram. The multicritical point now becomes brittle and splits into two tricritical points where first- and second-order symmetry-breaking transitions meet. A quantum fluctuations analysis shows that, surprisingly, the tricritical points exhibit anomalous finite fluctuations, as opposed to standard tricritical points arising in 3He-4He mixtures. Our work has direct implications for a variety of fields, including cold atoms and ions in optical cavities, circuit quantum electrodynamics, as well as optomechanical systems.

  19. Photodynamic therapy monitoring with optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Sirotkina, M. A.; Matveev, L. A.; Shirmanova, M. V.; Zaitsev, V. Y.; Buyanova, N. L.; Elagin, V. V.; Gelikonov, G. V.; Kuznetsov, S. S.; Kiseleva, E. B.; Moiseev, A. A.; Gamayunov, S. V.; Zagaynova, E. V.; Feldchtein, F. I.; Vitkin, A.; Gladkova, N. D.

    2017-02-01

    Photodynamic therapy (PDT) is a promising modern approach for cancer therapy with low normal tissue toxicity. This study was focused on vascular-targeted chlorin e6-mediated PDT. A new angiographic imaging approach known as M-mode-like optical coherence angiography (MML-OCA) was able to sensitively detect PDT-induced microvascular alterations in the mouse ear tumour model CT26. Histological analysis showed that the main mechanisms of vascular PDT were thrombosis of blood vessels and hemorrhage, which agrees with the angiographic imaging by MML-OCA. The relationship between MML-OCA-detected early microvascular damage post PDT (within 24 hours) and tumour regression/regrowth was confirmed by histology. The advantages of MML-OCA, such as direct image acquisition, fast processing, robust and affordable system opto-electronics, and label-free high-contrast 3D visualization of the microvasculature, suggest attractive possibilities for this method in practical clinical monitoring of cancer therapies with microvascular involvement.

  20. High-resolution two-photon spectroscopy of a 5p⁵6p ← 5p⁶ transition of xenon

    NASA Astrophysics Data System (ADS)

    Altiere, Emily; Miller, Eric R.; Hayamizu, Tomohiro; Jones, David J.; Madison, Kirk W.; Momose, Takamasa

    2018-01-01

    We report high-resolution Doppler-free two-photon excitation spectroscopy of Xe from the ground state to the 5p⁵(²P₃/₂)6p[3/2]₂ electronic excited state. This is a first step toward developing a comagnetometer using polarized 129Xe atoms for planned neutron electric dipole moment measurements at TRIUMF. Narrow-linewidth radiation at 252.5 nm produced by a continuous-wave laser was built up in an optical cavity to excite the two-photon transition, and the near-infrared emission from the 5p⁵6p excited state to the 5p⁵6s intermediate electronic state was used to detect the two-photon transition. Hyperfine constants and isotope shift parameters were evaluated and compared with previously reported values. In addition, the detected photon count rate was estimated from the observed intensities.

  1. Psychophysics of the probability weighting function

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2011-03-01

    A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of these functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1, with w(0) = 0, w(1/e) = 1/e and w(1) = 1), which has been extensively studied in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters of the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
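    A minimal numerical illustration of the Prelec form (the α value is hypothetical, chosen only for illustration):

```python
import numpy as np

def prelec_w(p, alpha=0.65):
    """Prelec (1998) probability weighting function w(p) = exp(-(-ln p)**alpha)."""
    p = np.asarray(p, dtype=float)
    return np.exp(-(-np.log(p)) ** alpha)

# Fixed points hold for any 0 < alpha < 1: w(1/e) = 1/e and w(1) = 1.
print(prelec_w(np.exp(-1.0)))       # ~0.3679 = 1/e
print(prelec_w([0.01, 0.5, 0.99]))  # small p are overweighted, large p underweighted
```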

  2. The effect of medical marijuana laws on crime: evidence from state panel data, 1990-2006.

    PubMed

    Morris, Robert G; TenEyck, Michael; Barnes, J C; Kovandzic, Tomislav V

    2014-01-01

    Debate has surrounded the legalization of marijuana for medical purposes for decades. Some have argued medical marijuana legalization (MML) poses a threat to public health and safety, perhaps also affecting crime rates. In recent years, some U.S. states have legalized marijuana for medical purposes, reigniting political and public interest in the impact of marijuana legalization on a range of outcomes. Relying on U.S. state panel data, we analyzed the association between state MML and state crime rates for all Part I offenses collected by the FBI. Results did not indicate a crime exacerbating effect of MML on any of the Part I offenses. Alternatively, state MML may be correlated with a reduction in homicide and assault rates, net of other covariates. These findings run counter to arguments suggesting the legalization of marijuana for medical purposes poses a danger to public health in terms of exposure to violent crime and property crimes.

  3. The Effect of Medical Marijuana Laws on Crime: Evidence from State Panel Data, 1990-2006

    PubMed Central

    Morris, Robert G.; TenEyck, Michael; Barnes, J. C.; Kovandzic, Tomislav V.

    2014-01-01

    Background Debate has surrounded the legalization of marijuana for medical purposes for decades. Some have argued medical marijuana legalization (MML) poses a threat to public health and safety, perhaps also affecting crime rates. In recent years, some U.S. states have legalized marijuana for medical purposes, reigniting political and public interest in the impact of marijuana legalization on a range of outcomes. Methods Relying on U.S. state panel data, we analyzed the association between state MML and state crime rates for all Part I offenses collected by the FBI. Findings Results did not indicate a crime exacerbating effect of MML on any of the Part I offenses. Alternatively, state MML may be correlated with a reduction in homicide and assault rates, net of other covariates. Conclusions These findings run counter to arguments suggesting the legalization of marijuana for medical purposes poses a danger to public health in terms of exposure to violent crime and property crimes. PMID:24671103
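    A hedged sketch of the kind of state-panel difference-in-differences regression described here, with state and year fixed effects and state-clustered standard errors; the data are simulated and the specification is deliberately simplified, so this is not the authors' exact model:

```python
# Sketch: regress a simulated crime rate on an MML indicator with state and year
# fixed effects and state-clustered standard errors (simplified illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
states, years = [f"S{i:02d}" for i in range(40)], range(1990, 2007)
rows = []
for s in states:
    adopt_year = rng.choice([1996, 2000, 2004, 9999])   # 9999 = never adopts MML
    base = rng.normal(500, 80)                           # state-specific crime level
    for y in years:
        mml = int(y >= adopt_year)
        rate = base + 2.0 * (y - 1990) - 15.0 * mml + rng.normal(0, 20)
        rows.append({"state": s, "year": y, "mml": mml, "crime_rate": rate})
df = pd.DataFrame(rows)

fit = smf.ols("crime_rate ~ mml + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": pd.factorize(df["state"])[0]})
print(fit.params["mml"], fit.bse["mml"])   # difference-in-differences estimate of MML
```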

  4. Realizing Fulde-Ferrell Superfluids via a Dark-State Control of Feshbach Resonances

    NASA Astrophysics Data System (ADS)

    He, Lianyi; Hu, Hui; Liu, Xia-Ji

    2018-01-01

    We propose that the long-sought Fulde-Ferrell superfluidity with nonzero momentum pairing can be realized in ultracold two-component Fermi gases of 40K or 6Li atoms by optically tuning their magnetic Feshbach resonances via the creation of a closed-channel dark state with a Doppler-shifted Stark effect. In this scheme, two counterpropagating optical fields are applied to couple two molecular states in the closed channel to an excited molecular state, leading to a significant violation of Galilean invariance in the dark-state regime and hence to the possibility of Fulde-Ferrell superfluidity. We develop a field theoretical formulation for both two-body and many-body problems and predict that the Fulde-Ferrell state has remarkable properties, such as anisotropic single-particle dispersion relation, suppressed superfluid density at zero temperature, anisotropic sound velocity, and rotonic collective mode. The latter two features can be experimentally probed using Bragg spectroscopy, providing a smoking-gun proof of Fulde-Ferrell superfluidity.

  5. Criteria for the Function Classes UBC and M1♯

    NASA Astrophysics Data System (ADS)

    Aulaskari, Rauno; Rättyä, Jouni

    2011-09-01

    For a function f, meromorphic in the unit disc D, and parameter values 0 < p < ∞, we consider the quantity fp♯(z) = |f(z)|^{p/2-1}|f'(z)|/(1+|f(z)|^p). New characterizations for the classes UBC and M1♯ in terms of fp♯ are obtained.

  6. Tinnitus pitch and minimum masking levels in different etiologies.

    PubMed

    Zagólski, Olaf; Stręk, Paweł

    2014-07-01

    We sought to determine whether the results of audiological tests and tinnitus characteristics, particularly tinnitus pitch and minimum masking level (MML), depend on tinnitus etiology, and what other etiology-specific tinnitus characteristics there are. The patients answered questions concerning tinnitus laterality, duration, character, aggravation, alleviation, previous treatment, and circumstances of onset. The results of tympanometry, pure-tone audiometry, distortion-product otoacoustic emissions, tinnitus likeness spectrum, MML, and uncomfortable loudness level were evaluated. Patients with several tinnitus etiological factors were excluded. The remaining participants were divided into groups according to medical history: acute acoustic trauma: 67 ears; chronic acoustic trauma: 82; prolonged use of oral estrogen and progesterone contraceptives: 46; Ménière's disease: 25; congenital hearing loss: 19; sensorineural sudden deafness: 40; dull head trauma: 19; viral labyrinthitis: 53; stroke: 6; presbycusis: 152. Data of 509 ears were analysed. Tinnitus pitch was highest in patients with acute acoustic trauma and lowest in patients receiving estrogen and progesterone. MML was lowest after acute acoustic trauma and in congenital hearing loss, and highest after a stroke and in the case of presbytinnitus. Tinnitus pitch and MML are etiology dependent.

  7. Masses and β-Decay Spectroscopy of Neutron-Rich Odd-Odd 160,162Eu Nuclei: Evidence for a Subshell Gap with Large Deformation at N = 98

    NASA Astrophysics Data System (ADS)

    Hartley, D. J.; Kondev, F. G.; Orford, R.; Clark, J. A.; Savard, G.; Ayangeakaa, A. D.; Bottoni, S.; Buchinger, F.; Burkey, M. T.; Carpenter, M. P.; Copp, P.; Gorelov, D. A.; Hicks, K.; Hoffman, C. R.; Hu, C.; Janssens, R. V. F.; Klimes, J. W.; Lauritsen, T.; Sethi, J.; Seweryniak, D.; Sharma, K. S.; Zhang, H.; Zhu, S.; Zhu, Y.

    2018-05-01

    The structure of deformed neutron-rich nuclei in the rare-earth region is of significant interest for both the astrophysics and nuclear structure fields. At present, a complete explanation for the observed peak in the elemental abundances at A ~ 160 eludes astrophysicists, and models depend on accurate quantities, such as masses, lifetimes, and branching ratios of deformed neutron-rich nuclei in this region. Unusual nuclear structure effects are also observed, such as the unexpectedly low energies of the first 2+ levels in some even-even nuclei at N = 98. In order to address these issues, mass and β-decay spectroscopy measurements of the 160Eu (N = 97) and 162Eu (N = 99) nuclei were performed at the Californium Rare Isotope Breeder Upgrade radioactive beam facility at Argonne National Laboratory. Evidence for a gap in the single-particle neutron energies at N = 98 and for large deformation (β2 ~ 0.3) is discussed in relation to the unusual phenomena observed at this neutron number.

  8. Effect of cancellation in neutrinoless double beta decay

    NASA Astrophysics Data System (ADS)

    Mitra, Manimala; Pascoli, Silvia; Wong, Steven

    2014-11-01

    In light of recent experimental results, we carefully analyze the effects of interference in neutrinoless double beta decay, when more than one mechanism is operative. If a complete cancellation is at work, the half-life of the corresponding isotope is infinite, and any constraint on it will automatically be satisfied. We analyze this possibility in detail assuming a cancellation in 136Xe, and find its implications on the half-life of other isotopes, such as 76Ge. For definiteness, we consider the role of light and heavy sterile neutrinos. In this case, the effective Majorana mass parameter can be redefined to take into account all contributions, and its value gets suppressed. Hence, larger values of neutrino masses are required for the same half-life. The canonical light neutrino contribution cannot saturate the present limits of half-lives or the positive claim of observation of neutrinoless double beta decay, once the stringent bounds from cosmology are taken into account. For the case of cancellation, where all the sterile neutrinos are heavy, the tension between the results from neutrinoless double beta decay and cosmology becomes more severe. We show that the inclusion of light sterile neutrinos in this setup can resolve this issue. Using the recent results from GERDA, we derive upper limits on the active-sterile mixing angles and compare them with the case of no cancellation. The required values of the mixing angles become larger if a cancellation is at work. A direct test of destructive interference in 136Xe is provided by the observation of this process in other isotopes, and we study in detail the correlation between their half-lives. Finally, we discuss the model realizations which can accommodate light and heavy sterile neutrinos and the cancellation. We show that sterile neutrinos in the few hundred MeV or GeV mass range, coming from an extended seesaw framework or a further extension, can satisfy the required cancellation.

  9. Change of translational-rotational coupling in liquids revealed by field-cycling 1H NMR

    NASA Astrophysics Data System (ADS)

    Meier, R.; Schneider, E.; Rössler, E. A.

    2015-01-01

    Applying the field-cycling nuclear magnetic resonance technique, the frequency dependence of the 1H spin-lattice relaxation rate, R1(ω) = 1/T1(ω), is measured for propylene glycol (PG) which is increasingly diluted with deuterated chloroform. A frequency range of 10 kHz-20 MHz and a broad temperature interval from 220 K down to about 100 K are covered. The results are compared to those of experiments where glycerol and o-terphenyl are diluted with their deuterated counterparts. Reflecting intra- as well as intermolecular relaxation, the dispersion curves R1(ω, x) (where x denotes the mole fraction of PG) allow one to extract the rotational time constant τrot(T, x) and the self-diffusion coefficient D(T, x) in a single experiment. The Stokes-Einstein-Debye (SED) relation is tested in terms of the quantity D(T, x)·τrot(T, x), which provides a measure of an effective hydrodynamic radius or, equivalently, of the spectral separation of the translational and rotational relaxation contributions. In contrast to o-terphenyl, glycerol and PG show a spectral separation much larger than suggested by the SED relation. In the case of the PG/chloroform mixtures, not only is an acceleration of the PG dynamics observed with increasing dilution, but the spectral separation of rotational and translational relaxation contributions also continuously decreases, reaching a behavior similar to that of o-terphenyl already at about x = 0.6; i.e., while D(T, x)·τrot(T, x) in the mixture is essentially temperature independent, it strongly increases with x, thus signaling a change of translational-rotational coupling. This directly reflects the dissolution of the hydrogen-bond network and thus a change of the solution structure.
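    The product D·τrot measures an effective hydrodynamic radius because, for stick boundary conditions, the Stokes-Einstein and Debye expressions combine to a temperature-independent constant (a standard textbook result, not specific to this paper):

    \[
    D=\frac{k_{B}T}{6\pi\eta r_{H}},\qquad
    \tau_{\mathrm{rot}}=\frac{4\pi\eta r_{H}^{3}}{3k_{B}T}
    \;\Longrightarrow\;
    D\,\tau_{\mathrm{rot}}=\frac{2}{9}\,r_{H}^{2},
    \]

    so a value of D·τrot well above 2rH²/9, or one that changes with composition, signals a spectral separation of translational and rotational dynamics beyond the SED prediction.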

  10. Microscopic study of low-lying spectra of Λ hypernuclei based on a beyond-mean-field approach with a covariant energy density functional

    NASA Astrophysics Data System (ADS)

    Mei, H.; Hagino, K.; Yao, J. M.; Motoba, T.

    2015-06-01

    We present a detailed formalism of the microscopic particle-rotor model for hypernuclear low-lying states based on a covariant density functional theory. In this method, the hypernuclear states are constructed by coupling a hyperon to low-lying states of the core nucleus, which are described by the generator coordinate method (GCM) with particle number and angular momentum projections. We apply this method to study in detail the low-lying spectra of the Λ13C and Λ21Ne hypernuclei. We also briefly discuss the structure of Λ155Sm as an example of a heavy deformed hypernucleus. It is shown that the low-lying excitation spectra with positive-parity states of the hypernuclei, which are dominated by the Λ hyperon in the s orbital coupled to the core states, are similar to those of the corresponding core states, while the electric quadrupole transition strength, B(E2), from the first 2+ state to the ground state is reduced according to the mass number of the hypernuclei. Our study indicates that the energy splitting between the first 1/2- and 3/2- hypernuclear states is generally small for all the hypernuclei which we study. However, their configurations depend much on the properties of the core nucleus, in particular on the sign of the deformation parameter. That is, the first 1/2- and 3/2- states in Λ13C are dominated by a single configuration with the Λ particle in the p-wave orbits and thus provide good candidates for a study of the Λ spin-orbit splitting. On the other hand, those states in the other hypernuclei exhibit a large configuration mixing and thus their energy difference cannot be interpreted as the spin-orbit splitting for the p orbits.

  11. Martian Meteorological Lander

    NASA Astrophysics Data System (ADS)

    Vorontsov, V.; Pichkhadze, K.; Polyakov, A.

    2002-01-01

    The Martian meteorological lander (MML) is dedicated to landing on the Mars surface in order to monitor the state of the Martian atmosphere at the landing point during one Martian year. The MML is intended to become the basic element of a global network of meteorological mini-stations and will permit observation of the dynamics of Martian atmospheric parameters over a long duration. The main scientific tasks of the MML are as follows: study of the vertical structure of the Martian atmosphere during MML descent; and meteorological observations on the Martian surface during one Martian year. One of the essential factors influencing the lander design is the design of the descent trajectory. During the preliminary phase of development, five (5) options for the MML were considered; in our opinion, these variants accomplish the above-mentioned tasks with high effectiveness. Joined into the first group, the variants with a parachute system and with inflatable air brakes plus an inflatable airbag are similar in the arrangement of the pre-landing braking stage and completely analogous in landing by means of airbags. The use of an additional Inflatable Braking Unit (IBU) in the second variant does not affect the braking procedure - the decrease of velocity by the moment of touching the surface due to the decrease of the ballistic parameter Px. A distinctive feature of the other three MML concepts is the presence of an Inflatable Braking Unit (IBU) in their configurations (the IBU is rigidly joined with the landing module up to the moment of its touching the surface). Besides, in the variant with the torus-shaped IBU it also acts as a shock-absorbing unit. In two options, the Inflatable Braking Shock-Absorbing Unit (IBSAU) (or IBU) releases the surface module after landing, at the moment of IBSAU (or IBU) elastic recoil. Variants of this concept are equal in terms of mass (approximately 15 kg). For the variants with an IBU the landing velocity is up to 50-70 m/s. Stations of the last three options are much more reliable than the MML of the first and second options because their functional diagram is realized by the operation of 3-4 (instead of 8-10 for the first and second concepts) executive devices. A distinctive moment for the MML of the last three concepts, namely for variants 3 and 5, is the final stage of landing, accomplished by penetration of the forebody into the soil. Such a landing profile was taken into account during the development of one of the landing vehicles for the "MARS-96" SC. This permits simple technical solutions for putting the meteorological complex into operation and for carrying out its further operations on the surface. After a comparative analysis of the 5 concepts, the concepts with a parachute system and with an IBU and penetration unit were chosen as the most prospective for more detailed development. However, in the next step a new modification of the lander (a hybrid of the third and fifth options, with an inflatable braking device and a penetrating unit) was proposed and chosen for further development. Several small stations could be transported to Mars in the framework of a Mars Scout mission, or a Phobos Sample Return mission, as piggyback payload.

  12. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
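    As a numerical companion to this formulation (not the paper's Groebner-basis analysis), the same random-effects likelihood, with a between-method variance and method-specific error variances, can be maximized directly; the data, start values, and parameterization below are assumptions made only for illustration:

```python
# Sketch: direct maximum-likelihood fit of y_ij = mu + b_i + e_ij with b_i ~ N(0, tau^2)
# and method-specific e_ij ~ N(0, sigma_i^2), on simulated multi-method data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Hypothetical data: 3 methods with unequal replicate counts and unequal precisions.
biases, sds, ns = rng.normal(0, 0.5, 3), [0.2, 0.5, 1.0], [8, 5, 12]
data = [rng.normal(10.0 + b, s, size=n) for b, s, n in zip(biases, sds, ns)]

def neg_loglik(theta):
    mu, log_tau, log_sigmas = theta[0], theta[1], theta[2:]
    tau2 = np.exp(2.0 * log_tau)
    nll = 0.0
    for y, log_s in zip(data, log_sigmas):
        n = len(y)
        # Marginal covariance of one method's replicates: sigma_i^2 * I + tau^2 * J
        cov = np.exp(2.0 * log_s) * np.eye(n) + tau2 * np.ones((n, n))
        nll -= multivariate_normal.logpdf(y, mean=np.full(n, mu), cov=cov)
    return nll

x0 = [np.mean(np.concatenate(data)), np.log(0.3), np.log(0.3), np.log(0.3), np.log(0.3)]
fit = minimize(neg_loglik, x0, method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2:]))  # consensus mean, tau, sigma_i
```

    A restricted-likelihood (REML) variant would instead maximize the likelihood of error contrasts, removing the fixed-effect mean before estimating the variance components.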

  13. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  14. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML), and Exponential Maximum Likelihood ( EML ...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML), and Exponential Maximum Likelihood ( EML ) estimators for clock offset estimation in non-Gaussian or non

  15. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of the system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
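    MXLKID itself is LRLTRAN code for the CDC 7600, so as a language-neutral illustration of the underlying idea (simulate the dynamic system, form a Gaussian likelihood of the noisy measurements, and maximize it over the unknown parameters), here is a small hypothetical sketch; the model, parameter values, and noise level are invented for the example:

```python
# Hypothetical sketch of maximum-likelihood parameter identification for a nonlinear
# dynamic system x' = -a*x + b*x**2 observed with additive Gaussian measurement noise.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t_obs = np.linspace(0.0, 5.0, 40)
true_params, noise_sd, x_init = (1.2, 0.3), 0.02, 1.0

def simulate(a, b):
    sol = solve_ivp(lambda t, x: -a * x + b * x**2, (0.0, 5.0), [x_init],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

y_obs = simulate(*true_params) + rng.normal(0.0, noise_sd, t_obs.size)

def neg_loglik(params):
    a, b = params
    resid = y_obs - simulate(a, b)
    # Gaussian negative log-likelihood with known noise_sd (up to an additive constant)
    return 0.5 * np.sum(resid**2) / noise_sd**2

fit = minimize(neg_loglik, x0=[1.0, 0.1], method="Nelder-Mead")
print(fit.x)  # maximum-likelihood estimates; should land near (1.2, 0.3)
```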

  16. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
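    Viewed in modern terms, such successive-approximation schemes are close relatives of the EM iteration; the following minimal sketch (my own notation, initialization, and simulated data, not Walker's procedure) fits a two-component normal mixture when part of the sample carries known component labels:

```python
# EM-style successive approximation for a two-component normal mixture in which some
# observations are "identified" (labeled 0/1) and the rest are unlabeled. Sketch only.
import numpy as np

def fit_mixture(y_unlab, y_lab, z_lab, n_iter=200):
    """y_unlab: unlabeled data; y_lab, z_lab: labeled data and their 0/1 labels."""
    y_all = np.concatenate([y_unlab, y_lab])
    # crude initialization from the labeled subsample
    pi = z_lab.mean()
    mu = np.array([y_lab[z_lab == 0].mean(), y_lab[z_lab == 1].mean()])
    sd = np.array([y_lab[z_lab == 0].std() + 1e-6, y_lab[z_lab == 1].std() + 1e-6])
    for _ in range(n_iter):
        # E-step: posterior membership probabilities, for the unlabeled data only
        dens = np.array([np.exp(-0.5 * ((y_unlab - m) / s) ** 2) / s
                         for m, s in zip(mu, sd)])
        w1 = pi * dens[1] / ((1 - pi) * dens[0] + pi * dens[1])
        w_all1 = np.concatenate([w1, z_lab.astype(float)])  # labeled points keep 0/1 weights
        w_all0 = 1.0 - w_all1
        # M-step: weighted updates of the mixing proportion, means, and SDs
        pi = w_all1.mean()
        mu = np.array([np.average(y_all, weights=w_all0),
                       np.average(y_all, weights=w_all1)])
        sd = np.sqrt([np.average((y_all - mu[0]) ** 2, weights=w_all0),
                      np.average((y_all - mu[1]) ** 2, weights=w_all1)])
    return pi, mu, sd

rng = np.random.default_rng(2)
y0, y1 = rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.5, 200)
print(fit_mixture(np.concatenate([y0[50:], y1[50:]]),
                  np.concatenate([y0[:50], y1[:50]]),
                  np.concatenate([np.zeros(50), np.ones(50)])))
```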

  17. Origin of generalized entropies and generalized statistical mechanics for superstatistical multifractal systems

    NASA Astrophysics Data System (ADS)

    Gadjiev, Bahruz; Progulova, Tatiana

    2015-01-01

    We consider a multifractal structure as a mixture of fractal substructures and introduce a distribution function f(α), where α is a fractal dimension. Then we can introduce g(p) ~ ∫_{-ln p}^{μ} e^{-y} f(y) dy and show that distribution functions f(α) of the form f(α) = δ(α-1), f(α) = δ(α-θ), f(α) = 1/(α-1) and f(y) = y^{α-1} lead to the Boltzmann-Gibbs, Shafee, Tsallis and Anteneodo-Plastino entropies, respectively. Here δ(x) is the Dirac delta function. Therefore the Shafee entropy corresponds to a fractal structure, the Tsallis entropy describes a multifractal structure with a homogeneous distribution of fractal substructures, and the Anteneodo-Plastino entropy appears in the case of a power-law distribution f(y). We consider the Fokker-Planck equation for a fractal substructure and determine its stationary solution. To determine the distribution function of a multifractal structure we solve the two-dimensional Fokker-Planck equation and obtain its stationary solution. Then, applying the Bayes theorem, we obtain a distribution function for the entire system in the form of a q-exponential function. We compare the distribution functions obtained via the superstatistical approach with those obtained according to the maximum entropy principle.

  18. Large magnetoresistance and Fermi surface study of Sb2Se2Te single crystal

    NASA Astrophysics Data System (ADS)

    Shrestha, K.; Marinova, V.; Graf, D.; Lorenz, B.; Chu, C. W.

    2017-09-01

    We have studied the magnetotransport properties of a Sb2Se2Te single crystal. Magnetoresistance (MR) is maximum when the magnetic field is perpendicular to the sample surface and reaches a value of 1100% at B = 31 T with no sign of saturation. MR shows Shubnikov-de Haas (SdH) oscillations above B = 15 T. The frequency spectrum of SdH oscillations consists of three distinct peaks at α = 32 T, β = 80 T, and γ = 117 T, indicating the presence of three Fermi surface pockets. Among these frequencies, β is the prominent peak in the frequency spectrum of SdH oscillations measured at different tilt angles of the sample with respect to the magnetic field. From the angle dependence of β and Berry phase calculations, we have confirmed the trivial topology of the β-pocket. The cyclotron masses of charge carriers, obtained by using the Lifshitz-Kosevich formula, are found to be m*_β = 0.16 m₀ and m*_γ = 0.63 m₀ for the β and γ bands, respectively. The large MR of Sb2Se2Te is suitable for utilization in electronic instruments such as computer hard discs, high-field magnetic sensors, and memory devices.

  19. A D-octapeptide drug efflux pump inhibitor acts synergistically with azoles in a murine oral candidiasis infection model.

    PubMed

    Hayama, Kazumi; Ishibashi, Hiroko; Ishijima, Sanae A; Niimi, Kyoko; Tansho, Shigeru; Ono, Yasuo; Monk, Brian C; Holmes, Ann R; Harding, David R K; Cannon, Richard D; Abe, Shigeru

    2012-03-01

    Clinical management of patients undergoing treatment of oropharyngeal candidiasis with azole antifungals can be impaired by azole resistance. High-level azole resistance is often caused by the overexpression of Candida albicans efflux pump Cdr1p. Inhibition of this pump therefore represents a target for combination therapies that reverse azole resistance. We assessed the therapeutic potential of the D-octapeptide derivative RC21v3, a Cdr1p inhibitor, in the treatment of murine oral candidiasis caused by either the azole-resistant C. albicans clinical isolate MML611 or its azole-susceptible parental strain MML610. RC21v3, fluconazole (FLC), or a combination of both drugs were administered orally to immunosuppressed ICR mice at 3, 24, and 27 h after oral inoculation with C. albicans. FLC protected the mice inoculated with MML610 from oral candidiasis, but was only partially effective in MML611-infected mice. The co-application of RC21v3 (0.02 μmol per dose) potentiated the therapeutic performance of FLC for mice infected with either strain. It caused a statistically significant decrease in C. albicans cfu isolated from the oral cavity of the infected mice and reduced oral lesions. RC21v3 also enhanced the therapeutic activity of itraconazole against MML611 infection. These results indicate that RC21v3 in combination with azoles has potential as a therapy against azole-resistant oral candidiasis. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  20. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation because of its asymptotic properties: it is consistent as the sample size increases to infinity, and the resulting estimator is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
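    As a concrete illustration of the approach described above, the following minimal sketch fits a two-component normal mixture by maximum likelihood using the EM algorithm. The synthetic data and all variable names are illustrative only; they are not the rubber price and exchange rate series analysed in the paper.

```python
import numpy as np

def em_two_component(x, n_iter=200, tol=1e-8):
    """Maximum likelihood fit of a two-component univariate normal mixture (EM)."""
    w = np.array([0.5, 0.5])                      # mixing weights
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        joint = w * dens
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        ll = np.log(joint.sum(axis=1)).sum()      # observed-data log-likelihood
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, mu, var, ll

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(2.0, 1.0, 600)])
print(em_two_component(x))
```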

  1. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
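    The abstract does not reproduce the authors' expression, but the principle can be illustrated on a simpler linear-in-parameters model: when residuals are colored, the naive covariance of the estimates understates the true uncertainty, and a corrected ("sandwich") covariance built from the estimated residual autocovariance is more appropriate. The sketch below is an assumption-laden stand-in for the flight-test formulation, not the authors' algorithm.

```python
import numpy as np

def corrected_param_covariance(X, y, max_lag=20):
    """Least-squares/ML fit with a parameter covariance corrected for colored residuals.

    The naive covariance sigma^2 (X'X)^-1 assumes white residuals; the corrected
    form (X'X)^-1 X' R X (X'X)^-1 uses a banded estimate R of the residual
    autocovariance and is typically larger when the residuals are colored.
    """
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ theta
    n, p = X.shape
    naive = (r @ r / (n - p)) * np.linalg.inv(X.T @ X)
    acov = np.array([r[:n - k] @ r[k:] / n for k in range(max_lag + 1)])
    R = acov[0] * np.eye(n)
    for k in range(1, max_lag + 1):
        R += acov[k] * (np.eye(n, k=k) + np.eye(n, k=-k))
    XtXinv = np.linalg.inv(X.T @ X)
    corrected = XtXinv @ X.T @ R @ X @ XtXinv
    return theta, naive, corrected

# AR(1) residuals: the corrected standard errors are clearly larger than the naive ones
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), np.linspace(0.0, 10.0, n)])
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.9 * noise[t - 1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 0.3]) + noise
theta, naive, corrected = corrected_param_covariance(X, y)
print(np.sqrt(np.diag(naive)), np.sqrt(np.diag(corrected)))
```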

  2. Algebraic independence results for reciprocal sums of Fibonacci and Lucas numbers

    NASA Astrophysics Data System (ADS)

    Stein, Martin

    2011-09-01

    Let F_n and L_n denote the Fibonacci and Lucas numbers, respectively. D. Duverney, Ke. Nishioka, Ku. Nishioka and I. Shiokawa proved that the values of the Fibonacci zeta function ζ_F(2s) = ∑_(n=1)^∞ F_n^(−2s) are transcendental for any s ∈ N, using Nesterenko's theorem on the Ramanujan functions P(q), Q(q), and R(q). They obtained similar results for the Lucas zeta function ζ_L(2s) = ∑_(n=1)^∞ L_n^(−2s) and some related series. Later, C. Elsner, S. Shimomura and I. Shiokawa found conditions for the algebraic independence of these series. In my PhD thesis I generalized their approach and treated the following problem: We investigate all subsets of {∑_(n=1)^∞ 1/F_n^(2s₁), ∑_(n=1)^∞ (−1)^(n+1)/F_n^(2s₂), ∑_(n=1)^∞ 1/L_n^(2s₃), ∑_(n=1)^∞ (−1)^(n+1)/L_n^(2s₄) : s₁, s₂, s₃, s₄ ∈ N} and decide on their algebraic independence over Q. Actually this is a special case of a more general theorem for reciprocal sums of binary recurrent sequences.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  4. Modification of the MML turbulence model for adverse pressure gradient flows. M.S. Thesis - Akron Univ., 1993

    NASA Technical Reports Server (NTRS)

    Conley, Julianne M.

    1994-01-01

    Computational fluid dynamics is being used increasingly to predict flows for aerospace propulsion applications, yet there is still a need for an easy to use, computationally inexpensive turbulence model capable of accurately predicting a wide range of turbulent flows. The Baldwin-Lomax model is the most widely used algebraic model, even though it has known difficulties calculating flows with strong adverse pressure gradients and large regions of separation. The modified mixing length model (MML) was developed specifically to handle the separation which occurs on airfoils and has given significantly better results than the Baldwin-Lomax model. The success of these calculations warrants further evaluation and development of MML. The objective of this work was to evaluate the performance of MML for zero and adverse pressure gradient flows, and modify it as needed. The Proteus Navier-Stokes code was used for this study and all results were compared with experimental data and with calculations made using the Baldwin-Lomax algebraic model, which is currently available in Proteus. The MML model was first evaluated for zero pressure gradient flow over a flat plate, then modified to produce the proper boundary layer growth. Additional modifications, based on experimental data for three adverse pressure gradient flows, were also implemented. The adapted model, called MMLPG (modified mixing length model for pressure gradient flows), was then evaluated for a typical propulsion flow problem, flow through a transonic diffuser. Three cases were examined: flow with no shock, a weak shock and a strong shock. The results of these calculations indicate that the objectives of this study have been met. Overall, MMLPG is capable of accurately predicting the adverse pressure gradient flows examined in this study, giving generally better agreement with experimental data than the Baldwin-Lomax model.
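    The MML/MMLPG formulations themselves are not reproduced in the abstract; the sketch below only illustrates the class of model being discussed, a generic Prandtl mixing-length (algebraic) eddy-viscosity model with Van Driest near-wall damping. The constants and the velocity profile are standard textbook values, not those of the thesis.

```python
import numpy as np

KAPPA, A_PLUS = 0.41, 26.0      # von Karman constant and Van Driest damping constant

def mixing_length_eddy_viscosity(y, u, u_tau, nu):
    """Generic algebraic (mixing-length) turbulence model:
    nu_t = l_mix^2 * |du/dy|, with Van Driest damping of l_mix near the wall."""
    y_plus = y * u_tau / nu
    l_mix = KAPPA * y * (1.0 - np.exp(-y_plus / A_PLUS))
    dudy = np.gradient(u, y)
    return l_mix ** 2 * np.abs(dudy)

# rough illustration on a log-law velocity profile (flat plate, zero pressure gradient)
nu, u_tau = 1.5e-5, 0.05
y = np.linspace(1e-4, 0.05, 200)
u = u_tau * (np.log(y * u_tau / nu) / KAPPA + 5.0)
print(mixing_length_eddy_viscosity(y, u, u_tau, nu)[:5])
```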

  5. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  6. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
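    A rough sketch of the kind of computation described, assuming the standard marginal Fisher density for inclination-only data and using SciPy's exponentially scaled Bessel function so that the large exponentials cancel analytically, as the authors describe. This is not the Arason-Levi code; the synthetic data and starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e   # exponentially scaled Bessel: i0e(x) = exp(-|x|) * I0(x)

def neg_loglike(params, inc):
    """Negative log-likelihood of inclination-only data under the marginal Fisher
    density, with the large exponentials cancelled analytically."""
    inc0, kappa = params
    if kappa <= 0 or not (-np.pi / 2 < inc0 < np.pi / 2):
        return np.inf
    x = kappa * np.cos(inc) * np.cos(inc0)
    log_bessel = np.log(i0e(x)) + np.abs(x)              # log I0(x) without overflow
    log_2sinh = kappa + np.log1p(-np.exp(-2.0 * kappa))  # log(2 sinh kappa)
    ll = (np.log(kappa) - log_2sinh + np.log(np.cos(inc))
          + kappa * np.sin(inc) * np.sin(inc0) + log_bessel)
    return -ll.sum()

# synthetic inclination-only data (radians), steep and moderately dispersed
rng = np.random.default_rng(2)
inc = np.deg2rad(np.clip(rng.normal(65.0, 8.0, size=50), -89.0, 89.0))
fit = minimize(neg_loglike, x0=[np.deg2rad(55.0), 20.0], args=(inc,), method="Nelder-Mead")
print(np.rad2deg(fit.x[0]), fit.x[1])     # estimated mean inclination and precision
```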

  7. Emergence of Clusters: Halos, Efimov States, and Experimental Signals

    NASA Astrophysics Data System (ADS)

    Hove, D.; Garrido, E.; Sarriguren, P.; Fedorov, D. V.; Fynbo, H. O. U.; Jensen, A. S.; Zinner, N. T.

    2018-02-01

    We investigate the emergence of halos and Efimov states in nuclei by use of a newly designed model that combines self-consistent mean-field and three-body descriptions. Recent interest in neutron heavy calcium isotopes makes Ca 72 (Ca 70 +n +n ) an ideal realistic candidate on the neutron dripline, and we use it as a representative example that illustrates our broadly applicable conclusions. By smooth variation of the interactions we simulate the crossover from well-bound systems to structures beyond the threshold of binding, and find that halo configurations emerge from the mean-field structure for three-body binding energy less than ˜100 keV . Strong evidence is provided that Efimov states cannot exist in nuclei. The structure that bears the most resemblance to an Efimov state is a giant halo extending beyond the neutron-core scattering length. We show that the observable large-distance decay properties of the wave function can differ substantially from the bulk part at short distances, and that this evolution can be traced with our combination of few- and many-body formalisms. This connection is vital for interpretation of measurements such as those where an initial state is populated in a reaction or by a beta decay.

  8. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  9. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  10. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
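    The presence-only likelihood itself is not given in the abstract; the point about penalized likelihood can be illustrated with a simpler logistic-regression example in which the ordinary MLE does not exist (complete separation) while ridge-penalized estimates remain finite but depend on the penalty. The sketch below is only an analogy, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_logistic(X, y, lam):
    """Ridge-penalized logistic log-likelihood; lam -> 0 approaches ordinary ML."""
    def nll(beta):
        eta = X @ beta
        return np.sum(np.logaddexp(0.0, eta) - y * eta) + 0.5 * lam * beta @ beta
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

# completely separated data: the unpenalized MLE does not exist (|beta| grows without
# bound), while penalized estimates are finite but depend on the penalty strength
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
for lam in (1.0, 0.1, 0.01):
    print(lam, penalized_logistic(X, y, lam))
```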

  11. Study of tissue oxygen supply rate in a macroscopic photodynamic therapy singlet oxygen model

    NASA Astrophysics Data System (ADS)

    Zhu, Timothy C.; Liu, Baochang; Penjweini, Rozhin

    2015-03-01

    An appropriate expression for the oxygen supply rate (Γ_s) is required for the macroscopic modeling of the complex mechanisms of photodynamic therapy (PDT). It is unrealistic to model the actual heterogeneous tumor microvascular networks coupled with the PDT processes because of the large computational requirement. In this study, a theoretical microscopic model based on uniformly distributed Krogh cylinders is used to calculate Γ_s = g (1 − [³O₂]/[³O₂]₀), which can replace the complex modeling of blood vasculature while maintaining a reasonable resemblance to reality; g is the maximum oxygen supply rate and [³O₂]/[³O₂]₀ is the volume-average tissue oxygen concentration normalized to its value prior to PDT. The model incorporates kinetic equations of oxygen diffusion and convection within capillaries and oxygen saturation from oxyhemoglobin. Oxygen supply to the tissue is via diffusion from the uniformly distributed blood vessels. Oxygen can also diffuse along the radius and the longitudinal axis of the cylinder within tissue. The relations of Γ_s to [³O₂]/[³O₂]₀ are examined for a biologically reasonable range of the physiological parameters for the microvasculature and several light fluence rates (ϕ). The results show a linear relationship between Γ_s and [³O₂]/[³O₂]₀, independent of ϕ and photochemical parameters; the obtained g ranges from 0.4 to 1390 μM/s.

  12. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N₀ approaches infinity (regardless of the relative sizes of N₀ and Nᵢ, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  13. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    PubMed

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML-equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.

  14. The Art of Red Tide Science

    PubMed Central

    Hall, Emily R.; Nierenberg, Kate; Boyes, Anamari J.; Heil, Cynthia A.; Flewelling, Leanne J.; Kirkpatrick, Barbara

    2012-01-01

    Over the years, numerous outreach strategies by the science community, such as FAQ cards and website information, have been used to explain blooms of the toxic dinoflagellate, Karenia brevis that occur annually off the west coast of Florida to the impacted communities. Many state and federal agencies have turned to funded research groups for assistance in the development and testing of environmental outreach products. In the case of Florida red tide, the Fish and Wildlife Research Institute/Mote Marine Laboratory (MML) Cooperative Red Tide Agreement allowed MML to initiate a project aimed at developing innovative outreach products about Florida red tide. This project, which we coined “The Art of Red Tide Science,” consisted of a team effort between scientists from MML and students from Ringling College of Art and Design. This successful outreach project focused on Florida red tide can be used as a model to develop similar outreach projects for equally complex ecological issues. PMID:22712002

  15. The Art of Red Tide Science.

    PubMed

    Hall, Emily R; Nierenberg, Kate; Boyes, Anamari J; Heil, Cynthia A; Flewelling, Leanne J; Kirkpatrick, Barbara

    2012-05-01

    Over the years, numerous outreach strategies by the science community, such as FAQ cards and website information, have been used to explain blooms of the toxic dinoflagellate, Karenia brevis that occur annually off the west coast of Florida to the impacted communities. Many state and federal agencies have turned to funded research groups for assistance in the development and testing of environmental outreach products. In the case of Florida red tide, the Fish and Wildlife Research Institute/Mote Marine Laboratory (MML) Cooperative Red Tide Agreement allowed MML to initiate a project aimed at developing innovative outreach products about Florida red tide. This project, which we coined "The Art of Red Tide Science," consisted of a team effort between scientists from MML and students from Ringling College of Art and Design. This successful outreach project focused on Florida red tide can be used as a model to develop similar outreach projects for equally complex ecological issues.

  16. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
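    A minimal sketch of the outer step only: the inner support-reduction maximization over convex hazards with a fixed antimode is replaced here by a placeholder profile function, and a golden-section search (a bisection-style method for quasi-concave functions) locates its maximizer. The function and interval are illustrative assumptions, not the estimator of the paper.

```python
import numpy as np

def maximize_quasiconcave(profile, lo, hi, tol=1e-6):
    """Golden-section search for the maximizer of a quasi-concave profile
    (log-)likelihood; stands in for the bisection step over the antimode."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if profile(c) >= profile(d):
            b = d          # maximizer lies in [a, d]
        else:
            a = c          # maximizer lies in [c, b]
    return 0.5 * (a + b)

# placeholder for the partially maximised likelihood as a function of the antimode
profile = lambda antimode: -(antimode - 2.3) ** 2 + 1.0
print(maximize_quasiconcave(profile, 0.0, 10.0))   # close to 2.3
```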

  17. A maximum likelihood map of chromosome 1.

    PubMed Central

    Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S

    1979-01-01

    Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128

  18. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy attained by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  19. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
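    A simplified sketch under the assumption that the +/-1 data symbols are known (or hard-decided), in which case the ML estimates of signal amplitude and noise power have closed forms; the report itself treats the joint estimate for the matched-filter statistics. All numbers are illustrative.

```python
import numpy as np

def ml_snr_estimate(y, d=None):
    """ML estimates of signal amplitude and noise power from matched-filter outputs
    y_k = m*d_k + n_k; if the +/-1 symbols d_k are unknown, hard decisions sign(y_k)
    are used here as a simple stand-in (biased at low SNR)."""
    d = np.sign(y) if d is None else d
    m_hat = np.mean(d * y)                    # ML amplitude for known symbols
    var_hat = np.mean((y - m_hat * d) ** 2)   # ML noise power
    return m_hat, var_hat, m_hat ** 2 / var_hat

rng = np.random.default_rng(3)
d = rng.choice([-1.0, 1.0], size=10_000)
y = 1.0 * d + rng.normal(scale=0.7, size=d.size)   # true SNR = 1 / 0.49 ~ 2.04
print(ml_snr_estimate(y, d))   # known symbols
print(ml_snr_estimate(y))      # hard-decision stand-in
```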

  20. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single-marker regression and single-marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC₁ population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
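    A minimal sketch of the single-marker regression side of the comparison (phenotype regressed on a 0/1 backcross marker class, with a test on the marker effect); the maximum likelihood counterpart, which models the phenotype as a mixture within marker classes, is omitted. The data are simulated, not the longleaf x slash pine progeny.

```python
import numpy as np
from scipy import stats

def single_marker_test(pheno, geno):
    """Regress phenotype on a 0/1 marker genotype and test the marker effect."""
    slope, intercept, r, p_value, stderr = stats.linregress(geno, pheno)
    return slope, p_value

rng = np.random.default_rng(4)
geno = rng.integers(0, 2, size=83)                            # backcross marker classes
pheno = 10.0 + 1.5 * geno + rng.normal(scale=2.0, size=83)    # simulated height growth
print(single_marker_test(pheno, geno))
```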

  1. Field, Laboratory and Imaging spectroscopic Analysis of Landslide, Debris Flow and Flood Hazards in Lacustrine, Aeolian and Alluvial Fan Deposits Surrounding the Salton Sea, Southern California

    NASA Astrophysics Data System (ADS)

    Hubbard, B. E.; Hooper, D. M.; Mars, J. C.

    2015-12-01

    High resolution satellite imagery, field spectral measurements using a portable ASD spectrometer, and 2013 hyperspectral AVIRIS imagery were used to evaluate the age of the Martinez Mountain Landslide (MML) near the Salton Sea, in order to determine the relative ages of adjacent alluvial fan surfaces and the potential for additional landslides, debris flows, and floods. The Salton Sea (SS) occupies a pluvial lake basin, with ancient shorelines ranging from 81 meters to 113 meters above the modern lake level. The highest shoreline overlaps the toe of the 0.24-0.38 km³ MML deposit derived from hydrothermally altered granites exposed near the summit of Martinez Mountain. The MML was originally believed to be of early Holocene age. However, AVIRIS mineral maps show abundant desert varnish on the top and toe of the landslide. Desert varnish can provide a means of relative dating of alluvial fan (AF) or landslide surfaces, as it accumulates at determinable rates over time. Based on 1) the highest levels of desert varnish accumulation mapped within the basin, 2) abundant evaporite playa minerals on top of the toe of the landslide, and 3) the highest shoreline of the ancestral lake, with gastropod and bivalve shells, overtopping the toe of the landslide, we conclude that the MML predates the oldest alluvial fan terraces and lake sediments exposed in the Coachella and Imperial valleys and must be older than early Holocene (i.e. Late Pleistocene?). Thus, the MML landslide has the potential to be used as a spectral endmember for desert varnish thickness and thus a proxy for age discrimination of active AF washes versus desert pavements. Given the older age of the MML landslide and low water levels in the modern SS, the risk from future rockslides of this size and related seiches is rather low. However, catastrophic floods and debris flows do occur along the most active AF channels; and the aftermath of such flows can be identified spectrally by montmorillonite crusts forming in recently flooded channels, as well as coarse-grained hyper-concentrated flow deposits that leave sorted (dark) heavy mineral concentrate behind. These observations, as well as supporting spectroscopic and change detection studies, will allow us to evaluate such hazards in this and similar inter-montane pluvial basins around the world.

  2. Loose regulation of medical marijuana programs associated with higher rates of adult marijuana use but not cannabis use disorder.

    PubMed

    Williams, Arthur Robin; Santaella-Tenorio, Julian; Mauro, Christine M; Levin, Frances R; Martins, Silvia S

    2017-11-01

    Most US states have passed medical marijuana laws (MMLs), with great variation in program regulation impacting enrollment rates. We aimed to compare changes in rates of marijuana use, heavy use and cannabis use disorder across age groups while accounting for whether states enacted medicalized (highly regulated) or non-medical MML programs. Difference-in-differences estimates with time-varying state-level MML coded by program type (medicalized versus non-medical). Multi-level linear regression models adjusted for state-level random effects and covariates as well as historical trends in use. Nation-wide cross-sectional survey data from the US National Survey of Drug Use and Health (NSDUH) restricted use data portal aggregated at the state level. Participants comprised 2004-13 NSDUH respondents (n ~ 67 500/year); age groups 12-17, 18-25 and 26+ years. States had implemented eight medicalized and 15 non-medical MML programs. Primary outcome measures included (1) active (past-month) marijuana use; (2) heavy use (> 300 days/year); and (3) cannabis use disorder diagnosis, based on DSM-IV criteria. Covariates included program type, age group and state-level characteristics throughout the study period. Adults 26+ years of age living in states with non-medical MML programs increased past-month marijuana use by 1.46% (from 4.13 to 6.59%, P = 0.01), skewing towards greater heavy marijuana use by 2.36% (from 14.94 to 17.30%, P = 0.09) after MMLs were enacted. However, no associated increase in the prevalence of cannabis use disorder was found during the study period. Our findings do not show increases in prevalence of marijuana use among adults in states with medicalized MML programs. Additionally, there were no increases in adolescent or young adult marijuana outcomes following MML passage, irrespective of program type. Non-medical marijuana laws enacted in US states are associated with increased marijuana use, but only among adults aged 26+ years. Researchers and policymakers should consider program regulation and subgroup characteristics (i.e. demographics) when assessing population-level outcomes. © 2017 Society for the Study of Addiction.

  3. The modified Misgav-Ladach versus the Pfannenstiel-Kerr technique for cesarean section: a randomized trial.

    PubMed

    Xavier, Pedro; Ayres-De-Campos, Diogo; Reynolds, Ana; Guimarães, Mariana; Costa-Santos, Cristina; Patrício, Belmiro

    2005-09-01

    Modifications to the classic cesarean section technique described by Pfannenstiel and Kerr have been proposed in the last few years. The objective of this trial was to compare intraoperative and short-term postoperative outcomes between the Pfannenstiel-Kerr and the modified Misgav-Ladach (MML) techniques for cesarean section. This prospective randomized trial involved 162 patients undergoing transverse lower uterine segment cesarean section. Patients were allocated to one of the two arms: 88 to the MML technique and 74 to the Pfannenstiel-Kerr technique. Main outcome measures were defined as the duration of surgery, analgesic requirements, and bowel restitution by the second postoperative day. Additional outcomes evaluated were febrile morbidity, postoperative antibiotic use, postpartum endometritis, and wound complications. Student's t, Mann-Whitney, and Chi-square tests were used for statistical analysis of the results, and a p < 0.05 was considered as the probability level reflecting significant differences. No differences between groups were noted in the incidence of analgesic requirements, bowel restitution by the second postoperative day, febrile morbidity, antibiotic requirements, endometritis, or wound complications. The MML technique took on average 12 min less to complete (p = 0.001). The MML technique is faster to perform and similar in terms of febrile morbidity, time to bowel restitution, or need for postoperative medications. It is likely to be more cost-effective.

  4. Systematic study of cluster radioactivity of superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Wang, Y. Z.

    2018-01-01

    The probable cluster radioactivity (CR) of the superheavy nuclei 294118, 296120, and 298122 is studied by using the unified description (UD) formula, universal (UNIV) curve, Horoi formula, and universal decay law (UDL). The predictions by the former three models suggest that the probable emitted clusters are lighter nuclei, while the calculations within the UDL formula give a different prediction: that both the lighter clusters and heavier ones can be emitted from the parent nuclei. A further study on the competition between α decay and CR of Z = 104-124 isotopes is performed. The former three models predict that α decay is the dominant decay mode, but the UDL formula suggests that CR dominates over α decay for Z ≥ 118 nuclei, and the isotopes of Z = 118 with A = 292-296 and 308-318, Z = 120 with A = 284-304 and 308-324, and Z = 122 with A = 316-322 are the most likely candidates as cluster emitters. Because the former three formulas are just preformation models, only the lighter cluster emissions can be described. However, the UDL formula can predict the lighter and heavier CR owing to the inclusion of the preformation and fissionlike mechanisms. Finally, it is found that the shortest CR half-lives are always obtained when the daughter nuclei are around the doubly magic 208Pb within the UDL formula, which indicates that the shell effect has an important influence on CR.

  5. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show that there is a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  7. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
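    A small numerical check of the affine-invariance point made above: applying the same non-singular affine transformation to the training spectra and the unknown samples leaves per-point Gaussian maximum likelihood class assignments unchanged. The data and the transformation are arbitrary, not AVIRIS spectra.

```python
import numpy as np

def gaussian_ml_classify(train_by_class, samples):
    """Per-point Gaussian maximum likelihood classification."""
    scores = []
    for X in train_by_class:
        mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
        diff = samples - mu
        inv = np.linalg.inv(cov)
        # log-likelihood up to an additive constant common to all classes
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff)
                              + np.log(np.linalg.det(cov))))
    return np.argmax(np.array(scores), axis=0)

rng = np.random.default_rng(5)
class_a = rng.normal([0.0, 0.0, 0.0], 1.0, size=(100, 3))
class_b = rng.normal([2.0, 1.0, -1.0], 1.0, size=(100, 3))
test = rng.normal([1.0, 0.5, -0.5], 1.5, size=(50, 3))

A = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)     # non-singular linear part
b = np.array([5.0, -2.0, 1.0])                    # offset
labels = gaussian_ml_classify([class_a, class_b], test)
labels_affine = gaussian_ml_classify([class_a @ A.T + b, class_b @ A.T + b], test @ A.T + b)
print(np.array_equal(labels, labels_affine))      # expected: True
```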

  8. Computer-automated tinnitus assessment: noise-band matching, maskability, and residual inhibition.

    PubMed

    Henry, James A; Roberts, Larry E; Ellingson, Roger M; Thielman, Emily J

    2013-06-01

    Psychoacoustic measures of tinnitus typically include loudness and pitch match, minimum masking level (MML), and residual inhibition (RI). We previously developed and documented a computer-automated tinnitus evaluation system (TES) capable of subject-guided loudness and pitch matching. The TES was further developed to conduct computer-aided, subject-guided testing for noise-band matching (NBM), MML, and RI. The purpose of the present study was to document the capability of the upgraded TES to obtain measures of NBM, MML, and RI, and to determine the test-retest reliability of the responses obtained. Three subject-guided, computer-automated testing protocols were developed to conduct NBM. For MML and RI testing, a 2-12 kHz band of noise was used. All testing was repeated during a second session. Subjects meeting study criteria were selected from those who had previously been tested for loudness and pitch matching in our laboratory. A total of 21 subjects completed testing, including seven females and 14 males. The upgraded TES was found to be fairly time efficient. Subjects were generally reliable, both within and between sessions, with respect to the type of stimulus they chose as the best match to their tinnitus. Matching to bandwidth was more variable between measurements, with greater consistency seen for subjects reporting tonal tinnitus or wide-band noisy tinnitus than intermediate types. Between-session repeated MMLs were within 10 dB of each other for all but three of the subjects. Subjects who experienced RI during Session 1 tended to be those who experienced it during Session 2. This study may represent the first time that NBM, MML, and RI audiometric testing results have been obtained entirely through a self-contained, computer-automated system designed specifically for use in the clinic. Future plans include refinements to achieve greater testing efficiency. American Academy of Audiology.

  9. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
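    This is not the SubspaceEM algorithm itself, but a toy illustration of the underlying idea: when the references span (approximately) a low-dimensional subspace, squared distances, and hence Gaussian log-likelihood comparisons, can be evaluated in that subspace, greatly reducing the cost of each image-to-projection comparison. The sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ref, n_img, npix, rank, k = 200, 500, 1024, 20, 30

# structured reference "projections" lying near a low-dimensional subspace, noisy "images"
refs = rng.normal(size=(n_ref, rank)) @ rng.normal(size=(rank, npix))
imgs = refs[rng.integers(0, n_ref, n_img)] + 3.0 * rng.normal(size=(n_img, npix))

# k-dimensional PCA basis of the references
mean = refs.mean(axis=0)
_, _, Vt = np.linalg.svd(refs - mean, full_matrices=False)
basis = Vt[:k]

def pairwise_sq_dist(A, B):
    return (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T

# under isotropic Gaussian noise, comparing squared distances is equivalent to comparing
# log-likelihoods, so the best-matching reference can be found in the k-dimensional
# subspace instead of the full npix-dimensional image space
d_full = pairwise_sq_dist(imgs - mean, refs - mean)
d_sub = pairwise_sq_dist((imgs - mean) @ basis.T, (refs - mean) @ basis.T)
print(np.mean(d_full.argmin(axis=1) == d_sub.argmin(axis=1)))   # fraction of agreement
```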

  10. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.

  11. Vector MO magnetometry for mapping microwave currents

    NASA Astrophysics Data System (ADS)

    Višňovský, Š.; Lišková-Jakubisová, E.; Harward, I.; Celinski, Z.

    2018-05-01

    Magneto-optic (MO) effects in magnetic multilayers (MML) can be employed in non-invasive 2D mapping of microwave (mw) radiation on the surface of semiconductor chips. A typical sensor configuration consists of Fe nanolayers sandwiched with dielectrics on a thin Si substrate transparent to mw radiation. To extend the observation bandwidth, Δf, up to 100 GHz range the sensor works at ferromagnetic resonance (FMR) frequency in applied magnetic flux density, Bappl. The mw currents excite the precession of magnetization, M, in magnetic nanolayers proportional to their amplitude. The MO component reflected on the sensor surface is proportional to the amplitude of M component, M⊥. The laser source operates at the wavelength of 410 nm. Its plane of incidence is oriented perpendicular to the M⊥ plane. M⊥ oscillates between polar and transverse configurations. A substantial improvement of MO figure of merit takes place in aperiodic MML. More favorable Δf vs. Bappl dependence and MO response can potentially be achieved in MML imbedding hexagonal ferrite or Co nanolayers with in-plane magnetic anisotropy.

  12. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures whose complexity depends on N.

  13. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. Cramer-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference. C. V. Tran. Approved... MUSIC, and Maximum Likelihood (ML) asymptotic variances corresponding to the two-source direction-of-arrival estimation where sources were modeled as... [Figure-list residue: MUSIC for two equipowered signals impinging on a 5-element ULA, (a) |p| = 0.50 and |p| = 1.00, SNR = 20 dB]
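    To make the scenario in the excerpt concrete, here is a standard MUSIC pseudospectrum for two sources impinging on a 5-element uniform linear array; it is not the report's analysis, and the source angles, SNR and snapshot count are arbitrary.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, scan_deg):
    """MUSIC pseudospectrum for a half-wavelength-spaced uniform linear array."""
    n_elem = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]    # sample covariance
    _, eigvec = np.linalg.eigh(R)                              # eigenvalues ascending
    En = eigvec[:, : n_elem - n_sources]                       # noise subspace
    m = np.arange(n_elem)[:, None]
    A = np.exp(1j * np.pi * m * np.sin(np.deg2rad(scan_deg)))  # steering vectors
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

rng = np.random.default_rng(7)
n_elem, n_snap, doas = 5, 200, np.array([-10.0, 15.0])
m = np.arange(n_elem)[:, None]
A = np.exp(1j * np.pi * m * np.sin(np.deg2rad(doas)))
S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2.0)
noise = 0.1 * (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap)))
X = A @ S + noise

scan = np.linspace(-90.0, 90.0, 721)
P = music_spectrum(X, n_sources=2, scan_deg=scan)
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
print(np.sort(scan[peaks[np.argsort(P[peaks])[-2:]]]))   # should lie near -10 and 15 deg
```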

  14. Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhou, P.; Yan, H. J.

    2017-12-01

    In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρ c_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_t0(T v) + ∇·(T v v), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model through a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρ c_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.

  15. Very strong Rydberg atom scattering in K(12p)-CH3NO2 collisions: Role of transient ion pair formation

    NASA Astrophysics Data System (ADS)

    Kelley, M.; Buathong, S.; Dunning, F. B.

    2017-05-01

    Collisions between K(12p) Rydberg atoms and CH3NO2 target molecules are studied. Whereas CH3NO2 can form long-lived valence-bound CH3NO2⁻ ions, the data provide no evidence for production of long-lived K⁺···CH3NO2⁻ ion-pair states. Rather, the data show that collisions result in unusually strong Rydberg atom scattering. This behavior is attributed to ion-ion scattering resulting from formation of transient ion-pair states through transitions between the covalent K(12p) + CH3NO2 and ionic K⁺ + (dipole-bound) CH3NO2⁻ terms in the quasimolecule formed during collisions. The ion-pair states are destroyed through rapid dissociation of the CH3NO2⁻ ions induced by the field of the K⁺ core ion, the detached electron remaining bound to the K⁺ ion in a Rydberg state. Analysis of the experimental data shows that ion-pair lifetimes ≳10 ps are sufficient to account for the present observations. The present results are consistent with recent theoretical predictions that Rydberg collisions with CH3NO2 will result in strong collisional quenching. The work highlights a new mechanism for Rydberg atom scattering that could be important for collisions with other polar targets. For purposes of comparison, results obtained following K(12p)-SF6 collisions are also included.

  16. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  17. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
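    A minimal sketch of a band-recovery likelihood of the one-age-class type, with constant annual survival S and recovery rate f, maximized numerically; the cell probabilities and data are simulated assumptions, not the models or FORTRAN program of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglike(params, releases, recoveries):
    """Multinomial band-recovery log-likelihood with constant annual survival S and
    recovery rate f (one-age-class model); recoveries[i, j] = bands released in
    year i and recovered in year j (j >= i)."""
    S, f = params
    if not (0.0 < S < 1.0 and 0.0 < f < 1.0):
        return np.inf
    n_cohorts, n_years = recoveries.shape
    ll = 0.0
    for i in range(n_cohorts):
        j = np.arange(i, n_years)
        p = f * S ** (j - i)                   # recovery-cell probabilities for cohort i
        if p.sum() >= 1.0:
            return np.inf
        never = releases[i] - recoveries[i, i:].sum()
        ll += (recoveries[i, i:] * np.log(p)).sum() + never * np.log(1.0 - p.sum())
    return -ll

# simulate a small banding study and recover (S, f) by maximum likelihood
rng = np.random.default_rng(8)
S_true, f_true = 0.7, 0.1
releases = np.array([1000, 1000, 1000])
n_cohorts, n_years = 3, 5
rec = np.zeros((n_cohorts, n_years))
for i in range(n_cohorts):
    j = np.arange(i, n_years)
    p = np.append(f_true * S_true ** (j - i), 0.0)
    p[-1] = 1.0 - p[:-1].sum()                 # never-recovered cell
    rec[i, i:] = rng.multinomial(releases[i], p)[:-1]

fit = minimize(neg_loglike, x0=[0.5, 0.2], args=(releases, rec), method="Nelder-Mead")
print(fit.x)    # expected to be close to (0.7, 0.1)
```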

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  19. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
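    In the spirit of the procedure described in the two entries above, a one-parameter toy (unknown mixing proportion, known component densities) shows the step-size iteration that reduces to the ordinary successive-approximations (EM) update at step size 1 and still converges for step sizes between 0 and 2. The mixture and all numbers are illustrative, not the authors' general setting.

```python
import numpy as np

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

phi = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
f0, f1 = phi(x), phi(x - 3.0)        # known component densities; the weight p is unknown

def iterate(p, step, n_iter=100):
    """Generalized steepest-ascent form p <- p + step * (G(p) - p), where G is the
    successive-approximations (EM) map; step = 1 recovers the ordinary iteration."""
    for _ in range(n_iter):
        resp = p * f1 / (p * f1 + (1.0 - p) * f0)   # E-step responsibilities
        p = p + step * (resp.mean() - p)            # relaxed update
    return p

for step in (0.5, 1.0, 1.5, 1.9):
    print(step, iterate(0.5, step))    # all converge near the true weight 0.7
```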

  20. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
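    A small grid scan illustrating the phenomenon: for a made-up three-item pattern under the 3PL model with guessing, the likelihood of ability has two separate interior modes, so a single hill-climbing maximum likelihood search started near the wrong mode can return the wrong score. Item parameters are invented for the illustration.

```python
import numpy as np

def loglike_3pl(theta, resp, a, b, c):
    """Log-likelihood of ability theta (grid vector) for one response pattern
    under the three-parameter logistic (3PL) model."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (resp * np.log(p) + (1 - resp) * np.log(1.0 - p)).sum(axis=1)

# made-up items: an easy item answered correctly, a weakly discriminating item answered
# incorrectly, and a hard, highly discriminating MC item (c = 0.25) answered correctly
a = np.array([1.5, 0.8, 3.0])
b = np.array([-2.0, -1.0, 1.5])
c = np.array([0.20, 0.00, 0.25])
resp = np.array([1, 0, 1])

theta = np.linspace(-4.0, 4.0, 801)
ll = loglike_3pl(theta, resp, a, b, c)
modes = np.where((ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:]))[0] + 1
print(theta[modes])          # two interior modes, roughly at -1.4 and +1.7
print(theta[np.argmax(ll)])  # the global maximum likelihood score (the lower mode here)
```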

  1. Arithmetical functions and irrationality of Lambert series

    NASA Astrophysics Data System (ADS)

    Duverney, Daniel

    2011-09-01

    We use a method of Erdős in order to prove the linear independence over Q of the numbers 1, ∑_(n=1)^(+∞) 1/(q^(n²)−1), and ∑_(n=1)^(+∞) n/(q^(n²)−1) for every q ∈ Z with |q| ≥ 2. The main idea consists in considering the two above series as Lambert series. This allows us to expand them as power series in 1/q. The Taylor coefficients of these expansions are arithmetical functions, whose properties allow us to apply an elementary irrationality criterion, which yields the result.

  2. A novel nanobiotherapeutic poly-[hemoglobin-superoxide dismutase-catalase-carbonic anhydrase] with no cardiac toxicity for the resuscitation of a rat model with 90 minutes of sustained severe hemorrhagic shock with loss of 2/3 blood volume

    PubMed Central

    Bian, Yuzhu; Chang, Thomas Ming Swi

    2015-01-01

    We crosslink hemoglobin (Hb), superoxide dismutase (SOD), catalase (CAT), and carbonic anhydrase (CA) to form a soluble polyHb-SOD-CAT-CA nanobiotechnological complex. The obtained product is a soluble complex with three enhanced red blood cell (RBC) functions and without blood group antigens. In the present study, 2/3 of blood volume was removed to result in 90-min hemorrhagic shock at mean arterial blood pressure (MAP) of 30 mmHg. This was followed by the reinfusion of different resuscitation fluids, then followed for another 60 min. PolyHb-SOD-CAT-CA maintained the MAP at 87.5 ± 5 mmHg as compared with 3 volumes of lactated Ringer's solution, 43.3 ± 2.8 mmHg; blood, 91.3 ± 3.6 mmHg; polyHb-SOD-CAT, 86.0 ± 4.6 mmHg; poly stroma-free hemolysate (polySFHb), 85.0 ± 2.5 mmHg; and polyHb, 82.6 ± 3.5 mmHg. PolyHb-SOD-CAT-CA was superior to blood and the other fluids based on the following criteria. PolyHb-SOD-CAT-CA reduced tissue pCO2 from 98 ± 4.5 mmHg to 68.6 ± 3 mmHg. This was significantly (p < 0.05) more effective than lactated Ringer's solution (98 ± 4.5 mmHg), polyHb (90.1 ± 4.0 mmHg), polyHb-SOD-CAT (90.9 ± 1.4 mmHg), blood (79.1 ± 4.7 mmHg), and polySFHb (77 ± 5 mmHg). PolyHb-SOD-CAT-CA reduced the elevated ST level to 21.7 ± 6.7% and is significantly (p < 0.05) better than polyHb (57.7 ± 8.7%), blood (39.1 ± 1.5%), polySFHb (38.3 ± 2.1%), polyHb-SOD-CAT (27.8 ± 5.6%), and lactated Ringer's solution (106 ± 3.1%). The plasma cardiac troponin T (cTnT) level of the polyHb-SOD-CAT-CA group was significantly (p < 0.05) lower than that of all the other groups. PolyHb-SOD-CAT-CA reduced plasma lactate level from 18 ± 2.3 mmol/L to 6.9 ± 0.3 mmol/L. It was significantly more effective (p < 0.05) than lactated Ringer's solution (12.4 ± 0.6 mmol/L), polyHb (9.6 ± 0.7 mmol/L), blood (8.1 ± 0.2 mmol/L), polySFHb (8.4 ± 0.1 mmol/L), and polyHb-SOD-CAT (7.6 ± 0.3 mmol/L). PolyHb-SOD-CAT-CA can be stored for 320 days at room temperature. Lyophilized polyHb-SOD-CAT-CA can be heat pasteurized at 68 °C for 2 h. This can be important if there is a need to inactivate human immunodeficiency virus, Ebola virus, and other infectious organisms. PMID:25297052

  3. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  4. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  5. A qualitative analysis of coronary heart disease patient views of dietary adherence and web-based and mobile-based nutrition tools

    PubMed Central

    Yehle, Karen S.; Chen, Aleda M. H.; Plake, Kimberly S.; Yi, Ji Soo; Mobley, Amy R.

    2012-01-01

    PURPOSE Dietary adherence can be challenging for patients with coronary heart disease (CHD), as they may require multiple dietary changes. Choosing appropriate food items may be difficult or take extensive amounts of time without the aid of technology. The objective of this project was to (1) examine the dietary challenges faced by patients with CHD, (2) examine methods of coping with dietary challenges, (3) explore the feasibility of a web-based food decision support system, and (4) explore the feasibility of a mobile-based food decision support system. METHODS Food for the Heart (FFH), a website-based food decision support system, and Mobile Magic Lens (MML), a mobile-based system, were developed to aid in daily dietary choices. Three CHD patient focus groups were conducted and focused on CHD-associated dietary changes as well as the FFH and MML prototypes. A total of 20 CHD patients and 7 informal caregivers participated. Qualitative, content analysis was performed to find themes grounded in the responses. RESULTS Five predominant themes emerged: 1) decreasing carbohydrate intake and portion control are common dietary challenges, 2) clinician and social support makes dietary adherence easier, 3) FFH could make meal-planning and dietary adherence less complicated, 4) MML could save time and assist with healthy choices, and 5) additional features need to be added to make both tools more comprehensive. CONCLUSIONS FFH and MML may be tools that CHD patients would value in making food choices and adhering to dietary recommendations, especially if additional features are added to assist patients with changes. PMID:22760245

  6. Transitional-turbulent spots and turbulent-turbulent spots in boundary layers

    NASA Astrophysics Data System (ADS)

    Wu, Xiaohua; Moin, Parviz; Wallace, James M.; Skarda, Jinhie; Lozano-Durán, Adrián; Hickey, Jean-Pierre

    2017-07-01

    Two observations drawn from a thoroughly validated direct numerical simulation of the canonical spatially developing, zero-pressure gradient, smooth, flat-plate boundary layer are presented here. The first is that, for bypass transition in the narrow sense defined herein, we found that the transitional-turbulent spot inception mechanism is analogous to the secondary instability of boundary-layer natural transition, namely a spanwise vortex filament becomes a Λ vortex and then, a hairpin packet. Long streak meandering does occur but usually when a streak is infected by a nearby existing transitional-turbulent spot. Streak waviness and breakdown are, therefore, not the mechanisms for the inception of transitional-turbulent spots found here. Rather, they only facilitate the growth and spreading of existing transitional-turbulent spots. The second observation is the discovery, in the inner layer of the developed turbulent boundary layer, of what we call turbulent-turbulent spots. These turbulent-turbulent spots are dense concentrations of small-scale vortices with high swirling strength originating from hairpin packets. Although structurally quite similar to the transitional-turbulent spots, these turbulent-turbulent spots are generated locally in the fully turbulent environment, and they are persistent with a systematic variation of detection threshold level. They exert indentation, segmentation, and termination on the viscous sublayer streaks, and they coincide with local concentrations of high levels of Reynolds shear stress, enstrophy, and temperature fluctuations. The sublayer streaks seem to be passive and are often simply the rims of the indentation pockets arising from the turbulent-turbulent spots.

  7. Effects of heat sink and source and entropy generation on MHD mixed convection of a Cu-water nanofluid in a lid-driven square porous enclosure with partial slip

    NASA Astrophysics Data System (ADS)

    Chamkha, A. J.; Rashad, A. M.; Mansour, M. A.; Armaghani, T.; Ghalambaz, M.

    2017-05-01

    In this work, the effects of the presence of a heat sink and a heat source, their lengths and locations, and the entropy generation on MHD mixed convection flow and heat transfer in a porous enclosure filled with a Cu-water nanofluid in the presence of a partial slip effect are investigated numerically. Both lid-driven vertical walls of the cavity are thermally insulated and move with constant, equal speeds in their own plane, and the partial slip effect is imposed on these walls. A segment of the bottom wall is considered as a heat source, while a heat sink is placed on the upper wall of the cavity. There are heated and cold parts placed on the bottom and upper walls, respectively, while the remaining parts are thermally insulated. Entropy generation and local heat transfer for different values of the governing parameters are presented in detail. It is found that the addition of nanoparticles decreases the convective heat transfer inside the porous cavity over the whole range of heat sink and source lengths. The results for the effects of the magnetic field show that the average Nusselt number decreases considerably upon enhancement of the Hartmann number. Also, adding nanoparticles to the pure fluid increases the entropy generation for all values of D when λ_l = -λ_r = 1.

  8. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and to determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and Bayes estimation under the precautionary, entropy, and L1 loss functions are compared. We compare these methods in terms of bias and MSE, computed with an R program. The results are displayed in tables to facilitate the comparison.
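
    For reference, the maximum likelihood estimator of the Rayleigh scale parameter has a closed form, and its bias and MSE can be checked by simulation much as the abstract describes (sketched here in Python rather than R). The Bayes estimator shown is the posterior mean under the Jeffreys prior, i.e. squared-error loss, which is our simplification rather than the precautionary, entropy, or L1 losses studied in the paper.

    ```python
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(0)

    def rayleigh_mle(x):
        # MLE of the Rayleigh scale sigma: sigma_hat = sqrt(sum(x^2) / (2n))
        return np.sqrt(np.sum(x**2) / (2 * len(x)))

    def rayleigh_bayes_jeffreys(x):
        # Posterior mean of sigma under the Jeffreys prior pi(sigma) ~ 1/sigma:
        # the posterior of sigma^2 is Inverse-Gamma(n, S/2), so
        # E[sigma | x] = sqrt(S/2) * Gamma(n - 1/2) / Gamma(n).
        # (Squared-error loss; NOT the loss functions studied in the paper.)
        n, S = len(x), np.sum(x**2)
        return np.sqrt(S / 2) * np.exp(gammaln(n - 0.5) - gammaln(n))

    sigma_true, n, reps = 2.0, 30, 5000
    est = {"MLE": [], "Bayes-Jeffreys": []}
    for _ in range(reps):
        x = rng.rayleigh(scale=sigma_true, size=n)
        est["MLE"].append(rayleigh_mle(x))
        est["Bayes-Jeffreys"].append(rayleigh_bayes_jeffreys(x))

    for name, vals in est.items():
        vals = np.asarray(vals)
        print(f"{name:15s} bias={vals.mean() - sigma_true:+.4f}  "
              f"MSE={np.mean((vals - sigma_true)**2):.4f}")
    ```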

  9. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  10. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
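
    As a point of comparison, the conventional estimator the authors benchmark against can be sketched using the classical Lea-Coulson mutant-count distribution computed with the Ma-Sandri-Sarkar recursion (our assumption of the baseline; the paper's MLE-BD uses a birth-death model instead). The mutant counts below are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def ld_pmf(m, nmax):
        """Luria-Delbruck (Lea-Coulson) mutant-count pmf p_0..p_nmax for expected
        number of mutations m, via the Ma-Sandri-Sarkar recursion."""
        p = np.zeros(nmax + 1)
        p[0] = np.exp(-m)
        for n in range(1, nmax + 1):
            j = np.arange(n)
            p[n] = (m / n) * np.sum(p[j] / (n - j + 1))
        return p

    def neg_log_lik(m, counts):
        # negative log-likelihood of the observed mutant counts given m
        p = ld_pmf(m, int(max(counts)))
        return -np.sum(np.log(np.maximum(p[counts], 1e-300)))

    # toy fluctuation-test data: mutant counts from parallel cultures (illustrative only)
    counts = np.array([0, 1, 0, 3, 2, 0, 7, 1, 0, 25, 4, 0, 2, 1, 6])
    res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20.0), args=(counts,), method="bounded")
    print("ML estimate of expected mutations per culture m:", round(res.x, 3))
    ```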

  11. Selective Optical Addressing of Nuclear Spins through Superhyperfine Interaction in Rare-Earth Doped Solids

    NASA Astrophysics Data System (ADS)

    Car, B.; Veissier, L.; Louchet-Chauvet, A.; Le Gouët, J.-L.; Chanelière, T.

    2018-05-01

    In Er³⁺:Y2SiO5, we demonstrate the selective optical addressing of the ⁸⁹Y³⁺ nuclear spins through their superhyperfine coupling with the Er³⁺ electronic spins possessing large Landé g factors. We experimentally probe the electron-nuclear spin mixing with photon echo techniques and validate our model. The site-selective optical addressing of the Y³⁺ nuclear spins is designed by adjusting the magnetic field strength and orientation. This constitutes an important step towards the realization of long-lived solid-state qubits optically addressed by telecom photons.

  12. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  13. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of repeat-accumulate codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.

  14. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  15. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  16. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  17. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
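
    The dependent-feature-tree machinery is beyond a short example, but the underlying mixture-decomposition step can be illustrated with a minimal EM fixed-point iteration for a one-dimensional Gaussian mixture; all data and settings below are invented and do not reproduce the paper's method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # synthetic two-component data
    x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 0.7, 200)])

    K = 2
    w = np.full(K, 1 / K)          # mixing proportions
    mu = rng.choice(x, K)          # component means (initialized from the data)
    var = np.full(K, x.var())      # component variances

    for _ in range(200):
        # E-step: responsibilities of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum likelihood updates given the responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    print("weights:", np.round(w, 3), "means:", np.round(mu, 3), "vars:", np.round(var, 3))
    ```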

  18. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  19. Three-dimensional motions in the Sculptor dwarf galaxy as a glimpse of a new era

    NASA Astrophysics Data System (ADS)

    Massari, D.; Breddels, M. A.; Helmi, A.; Posti, L.; Brown, A. G. A.; Tolstoy, E.

    2018-02-01

    The three-dimensional motions of stars in small galaxies beyond our own are minute, yet they are crucial for understanding the nature of gravity and dark matter [1,2]. Even for the dwarf galaxy Sculptor—one of the best-studied systems, which is inferred to be strongly dark matter dominated [3,4]—there are conflicting reports [5-7] on its mean motion around the Milky Way, and the three-dimensional internal motions of its stars have never been measured. Here, we present precise proper motions of Sculptor's stars based on data from the Gaia mission [8] and the Hubble Space Telescope. Our measurements show that Sculptor moves around the Milky Way on a high-inclination elongated orbit that takes it much further out than previously thought. For Sculptor's internal velocity dispersions, we find σR = 11.5 ± 4.3 km s-1 and σT = 8.5 ± 3.2 km s-1 along the projected radial and tangential directions. Thus, the stars in our sample move preferentially on radial orbits, as quantified by the anisotropy parameter, which we find to be β = 0.86 (+0.12, -0.83) at a location beyond the core radius. Taken at face value, this high radial anisotropy requires abandoning conventional models [9] for Sculptor's mass distribution. Our sample is dominated by metal-rich stars, and for these we find β_MR = 0.95 (+0.04, -0.27)—a value consistent with multi-component spherical models where Sculptor is embedded in a cuspy dark halo [10], as might be expected for cold dark matter.

  20. Planetary entry, descent, and landing technologies

    NASA Astrophysics Data System (ADS)

    Pichkhadze, K.; Vorontsov, V.; Polyakov, A.; Ivankov, A.; Taalas, P.; Pellinen, R.; Harri, A.-M.; Linkin, V.

    2003-04-01

    The Martian meteorological lander (MML) is intended to land on the Martian surface and monitor the atmosphere at the landing point for one Martian year. MMLs are to become the basic elements of a global network of meteorological mini-landers observing the dynamics of changes of the atmospheric parameters on the Red Planet. The MML's main scientific tasks are as follows: (1) study of the vertical structure of the Martian atmosphere throughout the MML descent; (2) on-surface meteorological observations for one Martian year. One of the essential factors influencing the lander's design is its entry, descent, and landing (EDL) sequence. During Phase A of the MML development, five different options for the lander's design were carefully analyzed. All of these options ensure the accomplishment of the above-mentioned scientific tasks with high effectiveness. CONCEPT A (conventional approach): Two lander options (with a parachute system + airbag and an inflatable airbrake + airbag) were analyzed. They are similar in terms of the braking phases and completely analogous in landing by means of airbags. CONCEPT B (innovative approach): Three lander options were analyzed. The distinguishing feature is the presence of inflatable braking units (IBU) in their configurations. SELECTED OPTION (innovative approach): Incorporating a unique design approach and modern technologies, the selected option of the lander represents a combination of the options analyzed in the framework of the Concept B study. Currently, the selected lander option is undergoing systems testing (Phase D1). Several MMLs can be delivered to Mars in the framework of various missions, as primary or piggyback payload: (1) USA-led "Mars Scout" (2007); (2) France-led "NetLander" (2007/2009); (3) Russia-led "Mars-Deimos-Phobos sample return" (2007); (4) independent mission (currently under preliminary study); etc.

  1. Refined diagnostic criteria and classification of mast cell leukemia (MCL) and myelomastocytic leukemia (MML): a consensus proposal

    PubMed Central

    Valent, P.; Sotlar, K.; Sperr, W. R.; Escribano, L.; Yavuz, S.; Reiter, A.; George, T. I.; Kluin-Nelemans, H. C.; Hermine, O.; Butterfield, J. H.; Hägglund, H.; Ustun, C.; Hornick, J. L.; Triggiani, M.; Radia, D.; Akin, C.; Hartmann, K.; Gotlib, J.; Schwartz, L. B.; Verstovsek, S.; Orfao, A.; Metcalfe, D. D.; Arock, M.; Horny, H.-P.

    2014-01-01

    Mast cell leukemia (MCL), the leukemic manifestation of systemic mastocytosis (SM), is characterized by leukemic expansion of immature mast cells (MCs) in the bone marrow (BM) and other internal organs; and a poor prognosis. In a subset of patients, circulating MCs are detectable. A major differential diagnosis to MCL is myelomastocytic leukemia (MML). Although criteria for both MCL and MML have been published, several questions remain concerning terminologies and subvariants. To discuss open issues, the EU/US-consensus group and the European Competence Network on Mastocytosis (ECNM) launched a series of meetings and workshops in 2011–2013. Resulting discussions and outcomes are provided in this article. The group recommends that MML be recognized as a distinct condition defined by mastocytic differentiation in advanced myeloid neoplasms without evidence of SM. The group also proposes that MCL be divided into acute MCL and chronic MCL, based on the presence or absence of C-Findings. In addition, a primary (de novo) form of MCL should be separated from secondary MCL that typically develops in the presence of a known antecedent MC neoplasm, usually aggressive SM (ASM) or MC sarcoma. For MCL, an imminent prephase is also proposed. This prephase represents ASM with rapid progression and 5%–19% MCs in BM smears, which is generally accepted to be of prognostic significance. We recommend that this condition be termed ASM in transformation to MCL (ASM-t). The refined classification of MCL fits within and extends the current WHO classification; and should improve prognostication and patient selection in practice as well as in clinical trials. PMID:24675021

  2. Dynamic Hydrostatic Pressure Regulates Nucleus Pulposus Phenotypic Expression and Metabolism in a Cell Density-Dependent Manner.

    PubMed

    Shah, Bhranti S; Chahine, Nadeen O

    2018-02-01

    Dynamic hydrostatic pressure (HP) loading can modulate nucleus pulposus (NP) cell metabolism, extracellular matrix (ECM) composition, and induce transformation of notochordal NP cells into mature phenotype. However, the effects of varying cell density and dynamic HP magnitude on NP phenotype and metabolism are unknown. This study examined the effects of physiological magnitudes of HP loading applied to bovine NP cells encapsulated within three-dimensional (3D) alginate beads. Study 1: seeding density (1 M/mL versus 4 M/mL) was evaluated in unloaded and loaded (0.1 MPa, 0.1 Hz) conditions. Study 2: loading magnitude (0, 0.1, and 0.6 MPa) applied at 0.1 Hz to 1 M/mL for 7 days was evaluated. Study 1: 4 M/mL cell density had significantly lower adenosine triphosphate (ATP), glycosaminoglycan (GAG) and collagen content, and increased lactate dehydrogenase (LDH). HP loading significantly increased ATP levels, and expression of aggrecan, collagen I, keratin-19, and N-cadherin in HP loaded versus unloaded groups. Study 2: aggrecan expression increased in a dose dependent manner with HP magnitude, whereas N-cadherin and keratin-19 expression were greatest in low HP loading compared to unloaded. Overall, the findings of the current study indicate that cell seeding density within a 3D construct is a critical variable influencing the mechanobiological response of NP cells to HP loading. NP mechanobiology and phenotypic expression was also found to be dependent on the magnitude of HP loading. These findings suggest that HP loading and culture conditions of NP cells may require complex optimization for engineering an NP replacement tissue.

  3. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  4. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  5. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  6. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
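
    This is not the authors' Pascal program, but a minimal sketch of the unconditional maximum likelihood fit it performs (Newton-Raphson on the logistic log-likelihood); the simulated data are illustrative, and the conditional (matched) analysis would instead maximize the likelihood conditional on each matched set, which is not shown here.

    ```python
    import numpy as np

    def logistic_mle(X, y, n_iter=25):
        """Unconditional maximum likelihood logistic regression via Newton-Raphson."""
        X = np.column_stack([np.ones(len(y)), X])   # add intercept column
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
            grad = X.T @ (y - p)                    # score vector
            hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
            beta += np.linalg.solve(hess, grad)
        se = np.sqrt(np.diag(np.linalg.inv(hess)))  # standard errors from the information matrix
        return beta, se

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 2))
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0] - 0.8 * X[:, 1]))))
    beta, se = logistic_mle(X, y)
    print("coef:", np.round(beta, 3), "SE:", np.round(se, 3))
    print("odds ratios (relative risk estimates):", np.round(np.exp(beta[1:]), 3))
    ```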

  7. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method, we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall Meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag

  8. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate a similar, unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
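
    A minimal sketch of the two estimation strategies being compared, assuming a Gaussian predictive distribution with a linear mean and constant spread (our simplification, not the study's full setup); it uses the closed-form CRPS of a normal distribution.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    n = 2000
    ens_mean = rng.normal(0, 3, n)                     # "ensemble mean" predictor (synthetic)
    y = 1.0 + 0.8 * ens_mean + rng.normal(0, 1.5, n)   # observations

    def unpack(theta):
        a, b, log_sigma = theta
        return a + b * ens_mean, np.exp(log_sigma)

    def neg_log_lik(theta):
        # maximum likelihood objective for a Gaussian predictive distribution
        mu, sigma = unpack(theta)
        return -np.sum(norm.logpdf(y, mu, sigma))

    def mean_crps(theta):
        # closed-form CRPS of a normal predictive distribution, averaged over cases
        mu, sigma = unpack(theta)
        z = (y - mu) / sigma
        crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
        return np.mean(crps)

    x0 = np.array([0.0, 1.0, 0.0])
    fit_ml = minimize(neg_log_lik, x0, method="Nelder-Mead")
    fit_crps = minimize(mean_crps, x0, method="Nelder-Mead")
    print("ML coefficients:  ", np.round(fit_ml.x[:2], 3), "sigma:", round(np.exp(fit_ml.x[2]), 3))
    print("CRPS coefficients:", np.round(fit_crps.x[:2], 3), "sigma:", round(np.exp(fit_crps.x[2]), 3))
    ```

    With a correctly specified Gaussian model, both fits land on essentially the same coefficients, which is the synthetic-case message of the abstract.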

  9. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance, and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.

  10. Forbidden coherent transfer observed between two realizations of quasiharmonic spin systems

    NASA Astrophysics Data System (ADS)

    Bertaina, S.; Yue, G.; Dutoit, C.-E.; Chiorescu, I.

    2017-07-01

    The multilevel system ⁵⁵Mn²⁺ is used to generate two pseudoharmonic level systems, as representations of the same electronic sextuplet at different nuclear spin projections. The systems are coupled using a forbidden nuclear transition induced by the crystalline anisotropy. We demonstrate Rabi oscillations between the two representations in conditions similar to two coupled pseudoharmonic quantum oscillators. Rabi oscillations are performed at a detuned pumping frequency which matches the energy difference between electronuclear states of different oscillators. We measure a coupling stronger than the decoherence rate to indicate the possibility of fast information exchange between the systems.

  11. Multiple-Quantum Transitions and Charge-Induced Decoherence of Donor Nuclear Spins in Silicon

    NASA Astrophysics Data System (ADS)

    Franke, David P.; Pflüger, Moritz P. D.; Itoh, Kohei M.; Brandt, Martin S.

    2017-06-01

    We study single- and multiquantum transitions of the nuclear spins of an ensemble of ionized arsenic donors in silicon and find quadrupolar effects on the coherence times, which we link to fluctuating electrical field gradients present after the application of light and bias voltage pulses. To determine the coherence times of superpositions of all orders in the 4-dimensional Hilbert space, we use a phase-cycling technique and find that, when electrical effects were allowed to decay, these times scale as expected for a fieldlike decoherence mechanism such as the interaction with surrounding ²⁹Si nuclear spins.

  12. Quadrature formula for evaluating left bounded Hadamard type hypersingular integrals

    NASA Astrophysics Data System (ADS)

    Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Nik Long, N. M. A.; Okhunov, Abdurahim

    2014-12-01

    Left semi-bounded Hadamard-type hypersingular integrals (HSI) of the form H(h,x) = (1/π) √((1+x)/(1-x)) ∫_{-1}^{1} √((1-t)/(1+t)) h(t)/(t-x)² dt, x ∈ (-1,1), where h(t) is a smooth function, are considered. The automatic quadrature scheme (AQS) is constructed by approximating the density function h(t) by truncated Chebyshev polynomials of the fourth kind. Numerical results reveal that the proposed AQS is highly accurate when h(t) is chosen to be a polynomial or rational function. The results are in line with the theoretical findings.

  13. Demonstration of Single-Barium-Ion Sensitivity for Neutrinoless Double-Beta Decay Using Single-Molecule Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    McDonald, A. D.; Jones, B. J. P.; Nygren, D. R.; Adams, C.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; Gómez-Cadenas, J. J.; González-Díaz, D.; Gutiérrez, R. M.; Guenette, R.; Hafidi, K.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Johnston, S.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; Monrabal, F.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Repond, J.; Renner, J.; Riordan, S.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Simón, A.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.; NEXT Collaboration

    2018-03-01

    A new method to tag the barium daughter in the double-beta decay of ¹³⁶Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba²⁺) resolution at a transparent scanning surface is demonstrated. A single-step photobleach confirms the single ion interpretation. Individual ions are localized with superresolution (~2 nm), and detected with a statistical significance of 12.9σ over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double-beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.

  14. US Traffic Fatalities, 1985–2014, and Their Relationship to Medical Marijuana Laws

    PubMed Central

    Mauro, Christine M.; Wall, Melanie M.; Kim, June H.; Cerdá, Magdalena; Keyes, Katherine M.; Hasin, Deborah S.; Galea, Sandro; Martins, Silvia S.

    2017-01-01

    Objectives. To determine the association of medical marijuana laws (MMLs) with traffic fatality rates. Methods. Using data from the 1985–2014 Fatality Analysis Reporting System, we examined the association between MMLs and traffic fatalities in multilevel regression models while controlling for contemporaneous secular trends. We examined this association separately for each state enacting MMLs. We also evaluated the association between marijuana dispensaries and traffic fatalities. Results. On average, MML states had lower traffic fatality rates than non-MML states. Medical marijuana laws were associated with immediate reductions in traffic fatalities in those aged 15 to 24 and 25 to 44 years, and with additional yearly gradual reductions in those aged 25 to 44 years. However, state-specific results showed that only 7 states experienced post-MML reductions. Dispensaries were also associated with traffic fatality reductions in those aged 25 to 44 years. Conclusions. Both MMLs and dispensaries were associated with reductions in traffic fatalities, especially among those aged 25 to 44 years. State-specific analysis showed heterogeneity of the MML–traffic fatalities association, suggesting moderation by other local factors. These findings could influence policy decisions on the enactment or repealing of MMLs and how they are implemented. PMID:27997245

  15. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing the maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  16. PAMLX: a graphical user interface for PAML.

    PubMed

    Xu, Bo; Yang, Ziheng

    2013-12-01

    This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.

  17. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  18. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  19. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  20. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
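
    A small illustration of the binned Poisson log-likelihood that such a small-number comparison relies on; the detected and predicted counts below are made up, not taken from the survey or the simulation code.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def poisson_loglike(observed, predicted):
        """log L = sum_i [ n_i * ln(mu_i) - mu_i - ln(n_i!) ] for binned counts."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return np.sum(observed * np.log(predicted) - predicted - gammaln(observed + 1))

    # hypothetical binned pulsar counts: detections vs. predictions from two parameter sets
    detected = [3, 7, 12, 5, 1]
    model_a = [2.5, 8.1, 10.9, 6.0, 1.4]
    model_b = [5.0, 5.0, 5.0, 5.0, 5.0]
    print("log L (model A):", round(poisson_loglike(detected, model_a), 2))
    print("log L (model B):", round(poisson_loglike(detected, model_b), 2))
    ```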

  1. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  2. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  3. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  4. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  5. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  6. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
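
    For concreteness, a minimal sketch of the unconditional (joint) ML estimation discussed above, applied to simulated dichotomous Rasch data with a generic optimizer; the data, item values, and sum-to-zero identification are illustrative, and the abstract's point is precisely that this estimator is biased for short tests.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n_persons, n_items = 300, 10
    theta_true = rng.normal(0, 1, n_persons)
    b_true = np.linspace(-1.5, 1.5, n_items)
    P = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
    X = rng.binomial(1, P)

    # drop perfect and zero scores, for which the person ML estimate does not exist
    keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < n_items)
    X = X[keep]

    def neg_log_lik(params):
        theta = params[:X.shape[0]]
        b = params[X.shape[0]:]
        b = b - b.mean()                 # identification: item difficulties sum to zero
        eta = theta[:, None] - b[None, :]
        return -(np.sum(X * eta) - np.sum(np.log1p(np.exp(eta))))

    x0 = np.zeros(X.shape[0] + n_items)
    fit = minimize(neg_log_lik, x0, method="L-BFGS-B")
    b_hat = fit.x[X.shape[0]:]
    b_hat -= b_hat.mean()
    print("true item difficulties:   ", np.round(b_true - b_true.mean(), 2))
    print("UML/JML item difficulties:", np.round(b_hat, 2))
    ```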

  7. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  8. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  9. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  10. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
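
    For reference, the brute-force maximum-likelihood soft-decision rule that the A* search accelerates can be written in a few lines for a small code; the (7,4) Hamming generator matrix, BPSK mapping, and noise level below are illustrative choices, not taken from the article.

    ```python
    import numpy as np
    from itertools import product

    # generator matrix of a (7,4) Hamming code in systematic form (illustrative)
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    codewords = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])
    bpsk = 1 - 2 * codewords            # map bit 0 -> +1, bit 1 -> -1

    rng = np.random.default_rng(5)
    msg = rng.integers(0, 2, 4)
    tx = 1 - 2 * ((msg @ G) % 2)
    rx = tx + rng.normal(0, 0.8, 7)     # soft received symbols over an AWGN channel

    # ML soft-decision decoding: pick the codeword with maximum correlation
    # (equivalently minimum Euclidean distance) to the received sequence
    scores = bpsk @ rx
    best = codewords[np.argmax(scores)]
    print("sent codeword:   ", (msg @ G) % 2)
    print("decoded codeword:", best)
    ```

    The cost of this exhaustive search grows as 2^k, which is exactly why tree-search decoders such as the A* algorithm matter for codes of length 64.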

  13. An evaluation of percentile and maximum likelihood estimators of weibull paremeters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
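
    A small sketch of the two estimation routes in Python: scipy's built-in maximum likelihood fit of a two-parameter Weibull (location fixed at zero, a simplification of the three-parameter case) and a simple two-percentile estimator; neither reproduces FITTER or Zanakis's exact procedure.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(6)
    shape_true, scale_true = 1.8, 10.0
    x = weibull_min.rvs(shape_true, scale=scale_true, size=200, random_state=rng)

    # maximum likelihood fit of a two-parameter Weibull (location fixed at 0)
    shape_mle, loc, scale_mle = weibull_min.fit(x, floc=0)

    # simple two-percentile estimator: linearize ln(-ln(1-p)) = k*ln(x_p) - k*ln(lambda)
    p1, p2 = 0.25, 0.75
    x1, x2 = np.quantile(x, [p1, p2])
    k_pct = (np.log(-np.log(1 - p2)) - np.log(-np.log(1 - p1))) / (np.log(x2) - np.log(x1))
    lam_pct = x2 / (-np.log(1 - p2)) ** (1 / k_pct)

    print(f"true:       shape={shape_true:.2f} scale={scale_true:.2f}")
    print(f"MLE:        shape={shape_mle:.2f} scale={scale_mle:.2f}")
    print(f"percentile: shape={k_pct:.2f} scale={lam_pct:.2f}")
    ```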

  14. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  15. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  16. Unclassified Publications of Lincoln Laboratory, 1 January - 31 December 1990. Volume 16

    DTIC Science & Technology

    1990-12-31

    Subject and report index excerpt (flattened columns): Hopped Communication Systems with Nonuniform Hopping Distributions (Apr. 1990, ADA223419); Bistatic Radar Cross Section (Fenn, A.J., 2 May 1990); EXPERIMENT; LUNAR PERTURBATION; MAXIMUM LIKELIHOOD ALGORITHM (JA-6241, JA-6467); LWIR SPECTRAL BAND; MAXIMUM LIKELIHOOD ESTIMATOR (JA-6476, MS-8466, MS-8424).

  17. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  18. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
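
    As a concrete reference point, a minimal Newton-Raphson maximum likelihood ability estimator for a 2PL model with known (hypothetical) item parameters is sketched below; the weighted likelihood, MAP, EAP, and Owen estimators modify this objective with weights or priors and are not shown.

    ```python
    import numpy as np

    def mle_theta(responses, a, b, n_iter=20):
        """Newton-Raphson (Fisher scoring) ML estimate of ability theta under the 2PL
        model, assuming known item discriminations a and difficulties b."""
        theta = 0.0
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            grad = np.sum(a * (responses - p))      # first derivative of the log-likelihood
            info = np.sum(a**2 * p * (1 - p))       # Fisher information at theta
            theta += grad / info
        return theta

    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])         # hypothetical item discriminations
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])       # hypothetical item difficulties
    u = np.array([1, 1, 1, 0, 0])                   # observed response pattern (mixed, so the MLE exists)
    print("ML ability estimate:", round(mle_theta(u, a, b), 3))
    ```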

  19. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  20. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  1. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  2. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Treesearch

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
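
    A minimal sketch of the per-class Gaussian maximum likelihood (quadratic discriminant) rule referred to above, applied to synthetic two-band pixel data; the class names, means, and covariances are invented for illustration and do not come from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # synthetic training pixels for two land-cover classes in two spectral bands
    means = {"forest": np.array([60.0, 120.0]), "water": np.array([30.0, 40.0])}
    cov = np.array([[40.0, 10.0], [10.0, 60.0]])
    train = {c: rng.multivariate_normal(m, cov, 200) for c, m in means.items()}

    # estimate class-conditional Gaussian parameters from the training pixels
    params = {c: (x.mean(axis=0), np.cov(x.T)) for c, x in train.items()}

    def ml_classify(pixel):
        """Assign the class whose Gaussian log-density is largest (equal priors assumed)."""
        best, best_ll = None, -np.inf
        for c, (mu, S) in params.items():
            d = pixel - mu
            ll = -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))
            if ll > best_ll:
                best, best_ll = c, ll
        return best

    print(ml_classify(np.array([55.0, 110.0])))   # expected: forest
    print(ml_classify(np.array([28.0, 45.0])))    # expected: water
    ```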

  3. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  4. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an alternative method to compute the NLSE using principles of multivariate calculus and is concerned with new optimization techniques for computing the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus, and the linear pseudo model of Edmond Malinvaud [4] is explained in a different way. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting a nonlinear regression function. Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his "Tutorial on maximum likelihood estimation."
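
    To make the estimator concrete: with additive Gaussian errors the NLSE coincides with the MLE, and a standard matrix-calculus route to computing it is the Gauss-Newton iteration theta <- theta + (J'J)^(-1) J'r. The sketch below is illustrative only and is not the derivation used in the paper; the exponential model y = a*exp(b*x) and all names are assumptions chosen for the example.

    ```python
    import numpy as np

    def gauss_newton_nlse(x, y, theta0, n_iter=50, tol=1e-10):
        """Gauss-Newton iterations for the illustrative nonlinear model y ~ a*exp(b*x)."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            a, b = theta
            f = a * np.exp(b * x)                 # model predictions
            r = y - f                             # residuals
            # Jacobian of f with respect to (a, b)
            J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
            step = np.linalg.solve(J.T @ J, J.T @ r)
            theta = theta + step
            if np.linalg.norm(step) < tol:
                break
        return theta

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0, 100)
    y = 2.0 * np.exp(0.7 * x) + rng.normal(scale=0.05, size=x.size)
    print(gauss_newton_nlse(x, y, theta0=[1.0, 0.1]))   # close to (2.0, 0.7)
    ```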

  5. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  6. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  7. Effect of Biaxial Strain on the Phase Transitions of Ca(Fe1-xCox)2As2

    NASA Astrophysics Data System (ADS)

    Böhmer, A. E.; Sapkota, A.; Kreyssig, A.; Bud'ko, S. L.; Drachuck, G.; Saunders, S. M.; Goldman, A. I.; Canfield, P. C.

    2017-03-01

    We study the effect of applied strain as a physical control parameter for the phase transitions of Ca(Fe1-xCox)2As2 using resistivity, magnetization, x-ray diffraction, and 57Fe Mössbauer spectroscopy. Biaxial strain, namely, compression of the basal plane of the tetragonal unit cell, is created through firm bonding of samples to a rigid substrate via differential thermal expansion. This strain is shown to induce a magnetostructural phase transition in originally paramagnetic samples, and superconductivity in previously nonsuperconducting ones. The magnetostructural transition is gradual as a consequence of using strain instead of pressure or stress as a tuning parameter.

  8. Stochastic gravitational wave background from newly born massive magnetars: The role of a dense matter equation of state

    NASA Astrophysics Data System (ADS)

    Cheng, Quan; Zhang, Shuang-Nan; Zheng, Xiao-Ping

    2017-04-01

    Newly born massive magnetars are generally considered to be produced by binary neutron star (NS) mergers, which could give rise to short gamma-ray bursts (SGRBs). The strong magnetic fields and fast rotation of these magnetars make them promising sources for gravitational wave (GW) detection using ground based GW interferometers. Based on the observed masses of Galactic NS-NS binaries, by assuming different equations of state (EOSs) of dense matter, we investigate the stochastic gravitational wave background (SGWB) produced by an ensemble of newly born massive magnetars. The massive magnetar formation rate is estimated through: (i) the SGRB formation rate (hereafter referred to as MFR1); (ii) the NS-NS merger rate (hereafter referred to as MFR2). We find that for massive magnetars with masses M_rm = 2.4743 M⊙, if EOS CDDM2 is assumed, the resultant SGWBs may be detected by the future Einstein Telescope (ET) even for MFR1 with the minimal local formation rate, and for MFR2 with a local merger rate ρ̇_co(0) ≲ 10 Mpc⁻³ Myr⁻¹. However, if EOS BSk21 is assumed, the SGWB may be detectable by the ET for MFR1 with the maximal local formation rate. Moreover, the background spectra show cutoffs at about 350 Hz in the case of EOS BSk21 and at 124 Hz for CDDM2. We suggest that if the cutoff at ~100 Hz in the background spectrum from massive magnetars could be detected, then the quark star EOS CDDM2 seems to be favorable. Moreover, EOSs that predict relatively small TOV maximum masses would be excluded.

  9. State Medical Marijuana Laws and the Prevalence of Opioids Detected Among Fatally Injured Drivers

    PubMed Central

    Santaella-Tenorio, Julian; Mauro, Christine; Wrobel, Julia; Cerdà, Magdalena; Keyes, Katherine M.; Hasin, Deborah; Martins, Silvia S.; Li, Guohua

    2016-01-01

    Objectives. To assess the association between medical marijuana laws (MMLs) and the odds of a positive opioid test, an indicator for prior use. Methods. We analyzed 1999–2013 Fatality Analysis Reporting System (FARS) data from 18 states that tested for alcohol and other drugs in at least 80% of drivers who died within 1 hour of crashing (n = 68 394). Within-state and between-state comparisons assessed opioid positivity among drivers crashing in states with an operational MML (i.e., allowances for home cultivation or active dispensaries) versus drivers crashing in states before a future MML was operational. Results. State-specific estimates indicated a reduction in opioid positivity for most states after implementation of an operational MML, although none of these estimates were significant. When we combined states, we observed no significant overall association (odds ratio [OR] = 0.79; 95% confidence interval [CI] = 0.61, 1.03). However, age-stratified analyses indicated a significant reduction in opioid positivity for drivers aged 21 to 40 years (OR = 0.50; 95% CI = 0.37, 0.67; interaction P < .001). Conclusions. Operational MMLs are associated with reductions in opioid positivity among 21- to 40-year-old fatally injured drivers and may reduce opioid use and overdose. PMID:27631755

  10. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
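
    The record's method is implemented as an R package; as a language-agnostic illustration of the core idea, the hedged sketch below applies a Laplace approximation to the marginal likelihood of a single observation with one latent log-volatility variable. The toy model y | v ~ N(0, exp(v)), v ~ N(mu0, tau^2) and all names are assumptions for the example, not the multifractal random walk likelihood itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def laplace_marginal_loglik(y, tau, mu0=0.0):
        """Laplace approximation to log p(y) for a toy stochastic-volatility-style model:
        y | v ~ N(0, exp(v)), v ~ N(mu0, tau**2), where v is a latent log-variance."""
        def log_joint(v):
            log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * v - 0.5 * y**2 * np.exp(-v)
            log_prior = -0.5 * np.log(2 * np.pi) - np.log(tau) - 0.5 * ((v - mu0) / tau) ** 2
            return log_lik + log_prior

        v_hat = minimize_scalar(lambda v: -log_joint(v)).x   # mode of the joint in v
        # curvature of -log p(y, v) at the mode
        h = 0.5 * y**2 * np.exp(-v_hat) + 1.0 / tau**2
        # Laplace: log p(y) ~= log p(y, v_hat) + 0.5*log(2*pi) - 0.5*log(h)
        return log_joint(v_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(h)

    print(laplace_marginal_loglik(y=0.8, tau=1.0))
    ```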

  11. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  12. 12-mode OFDM transmission using reduced-complexity maximum likelihood detection.

    PubMed

    Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold

    2015-02-01

    We report the transmission of 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM signals over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss.
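
    For context, the detector being simplified is the brute-force maximum likelihood rule, which searches all candidate symbol vectors for the one minimizing ||y - Hx||^2. The sketch below shows only that baseline criterion on a toy 2x2 QPSK channel; the reduced-complexity detector in the record approximates this search, and the matrix sizes and names here are illustrative assumptions.

    ```python
    import numpy as np
    from itertools import product

    def ml_detect(y, H, constellation):
        """Exhaustive maximum likelihood detection: pick the symbol vector minimizing
        the Euclidean distance ||y - H x||^2 (optimal under additive white Gaussian noise)."""
        best, best_metric = None, np.inf
        for candidate in product(constellation, repeat=H.shape[1]):
            x = np.array(candidate)
            metric = np.linalg.norm(y - H @ x) ** 2
            if metric < best_metric:
                best, best_metric = x, metric
        return best

    # toy 2x2 example with a QPSK alphabet (not the 12-mode system of the record)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    rng = np.random.default_rng(1)
    H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    x_true = qpsk[[0, 3]]
    y = H @ x_true + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    print(ml_detect(y, H, qpsk), x_true)
    ```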

  13. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  14. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  15. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  16. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  17. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  18. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  19. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
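
    As a rough illustration of the structure (state estimation interleaved with on-line noise identification), the sketch below runs a scalar Kalman filter whose measurement-noise variance is re-estimated recursively from the innovations. This is a simplified, exponentially weighted adaptation, not the recursive maximum likelihood algorithm described in the record; the model, gains, and parameter names are assumptions for the example.

    ```python
    import numpy as np

    def adaptive_kalman(z, q=1e-4, r0=1.0, alpha=0.05):
        """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k, z_k = x_k + v_k.
        The measurement-noise variance R is re-estimated on line from the innovations with an
        exponentially weighted update (a simplified stand-in for recursive ML identification)."""
        x, p, r = z[0], 1.0, r0
        for zk in z[1:]:
            p = p + q                               # predict: state unchanged, uncertainty grows
            nu = zk - x                             # innovation; E[nu^2] = p + r
            r = max((1 - alpha) * r + alpha * (nu**2 - p), 1e-8)   # recursive noise estimate
            k = p / (p + r)                         # Kalman gain with the adapted R
            x = x + k * nu
            p = (1 - k) * p
        return x, r

    rng = np.random.default_rng(2)
    truth = np.cumsum(rng.normal(scale=0.01, size=2000))
    z = truth + rng.normal(scale=0.5, size=2000)    # true measurement variance R = 0.25
    x_hat, r_hat = adaptive_kalman(z)
    print(x_hat, r_hat)                             # r_hat should settle near 0.25
    ```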

  20. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ˜21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
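
    The gain comes from fitting each event's azimuthal angle directly rather than a binned histogram. The hedged sketch below fits the idealized modulation density f(phi) = (1 + mu*cos 2(phi - phi0)) / 2*pi to simulated events by unbinned maximum likelihood; a real instrument such as COSI would fold in its full response, so the model and numbers here are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_like(params, phi):
        """Unbinned negative log-likelihood for the azimuthal scattering-angle model
        f(phi) = (1 + mu*cos(2*(phi - phi0))) / (2*pi)."""
        mu, phi0 = params
        f = (1.0 + mu * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)
        if np.any(f <= 0):
            return np.inf
        return -np.sum(np.log(f))

    # simulate events with modulation mu = 0.3, phi0 = 0.5 rad via rejection sampling
    rng = np.random.default_rng(3)
    phi = rng.uniform(0, 2 * np.pi, size=200000)
    keep = rng.uniform(0, 1.3, size=phi.size) < 1.0 + 0.3 * np.cos(2 * (phi - 0.5))
    phi = phi[keep]

    fit = minimize(neg_log_like, x0=[0.1, 0.0], args=(phi,), method="Nelder-Mead")
    print(fit.x)   # should be close to (0.3, 0.5)
    ```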

  1. Ciprofibrate therapy in patients with hypertriglyceridemia and low high density lipoprotein (HDL)-cholesterol: greater reduction of non-HDL cholesterol in subjects with excess body weight (The CIPROAMLAT study)

    PubMed Central

    Aguilar-Salinas, Carlos A; Assis-Luores-Vale, Andréia; Stockins, Benjamín; Rengifo, Hector Mario; Filho, José Dondici; Neto, Abrahão Afiune; Rabelo, Lísia Marcílio; Torres, Kerginaldo Paulo; Oliveira, José Egídio Paulo de; Machado, Carlos Alberto; Reyes, Eliana; Saavedra, Victor; Florenzano, Fernando; Hernández, Ma Victoria; Jiménez, Sergio Hernandez; Ramírez, Erika; Vazquez, Cuauhtémoc; Salinas, Saul; Hernández, Ismael; Medel, Octavio; Moreno, Ricardo; Lugo, Paula; Alvarado, Ricardo; Mehta, Roopa; Gutierrez, Victor; Gómez Pérez, Francisco J

    2004-01-01

    Background Hypertriglyceridemia in combination with low HDL cholesterol levels is a risk factor for cardiovascular disease. Our objective was to evaluate the efficacy of ciprofibrate for the treatment of this form of dyslipidemia and to identify factors associated with better treatment response. Methods Multicenter, international, open-label study. Four hundred and thirty seven patients were included. The plasma lipid levels at inclusion were fasting triglyceride concentrations between 1.6–3.9 mM/l and HDL cholesterol ≤ 1.05 mM/l for women and ≤ 0.9 mM/l for men. The LDL cholesterol was below 4.2 mM/l. All patients received ciprofibrate 100 mg/d. Efficacy and safety parameters were assessed at baseline and at the end of the treatment. The primary efficacy parameter of the study was percentage change in triglycerides from baseline. Results After 4 months, plasma triglyceride concentrations were decreased by 44% (p < 0.001). HDL cholesterol concentrations were increased by 10% (p < 0.001). Non-HDL cholesterol was decreased by 19%. A greater HDL cholesterol response was observed in lean patients (body mass index < 25 kg/m2) compared to the rest of the population (8.2 vs 19.7%, p < 0.001). In contrast, cases with excess body weight had a larger decrease in non-HDL cholesterol levels (-20.8 vs -10.8%, p < 0.001). There were no significant complications resulting from treatment with ciprofibrate. Conclusions Ciprofibrate is efficacious for the correction of hypertriglyceridemia / low HDL cholesterol. A greater decrease in non-HDL cholesterol was found among cases with excess body weight. The mechanism of action of ciprofibrate may be influenced by the pathophysiology of the disorder being treated. PMID:15272932

  2. Intrinsic and scattering attenuation of high-frequency S-waves in the central part of the External Dinarides

    NASA Astrophysics Data System (ADS)

    Majstorović, Josipa; Belinić, Tena; Namjesnik, Dalija; Dasović, Iva; Herak, Davorka; Herak, Marijan

    2017-09-01

    The central part of the External Dinarides (CED) is a geologically and tectonically complex region formed in the collision between the Adriatic microplate and the European plate. In this study, the contributions of intrinsic and scattering attenuation (Q_i⁻¹ and Q_sc⁻¹, respectively) to the total S-wave attenuation were calculated for the first time. The multiple lapse-time window analysis (MLTWA) method, based on the assumptions of multiple isotropic scattering in a homogeneous medium with uniformly distributed scatterers, was applied to seismograms of 450 earthquakes recorded at six seismic stations. Selected events have hypocentral distances between 40 and 90 km with local magnitudes between 1.5 and 4.7. The analysis was performed over 11 frequency bands with central frequencies between 1.5 and 16 Hz. Results show that the seismic albedo of the studied area is less than 0.5 and Q_i⁻¹ > Q_sc⁻¹ at all central frequencies and for all stations. These results imply that intrinsic attenuation dominates over scattering attenuation in the whole study area. Calculated total S-wave and expected coda wave attenuation for CED are in very good agreement with the ones measured in previous studies using the coda normalization and coda-Q methods. All estimated attenuation factors decrease with increasing frequency. The intrinsic attenuation for CED is among the highest observed elsewhere, which could be due to the highly fractured and fluid-filled carbonates in the upper crust. The scattering and the total S-wave attenuation for CED are close to the average values obtained in other studies performed worldwide. In particular, the good agreement of the frequency dependence of total attenuation in CED with that in the regions that contributed most strong-motion records to the ground motion prediction equations used in PSHA in Croatia indicates that those equations were well chosen and applicable to this area as far as attenuation properties are concerned.

  3. Experimental determination of iron isotope fractionations among Fe2+(aq)-FeSaq-Mackinawite at low temperatures: Implications for the rock record

    NASA Astrophysics Data System (ADS)

    Wu, Lingling; Druschel, Greg; Findlay, Alyssa; Beard, Brian L.; Johnson, Clark M.

    2012-07-01

    The Fe isotope fractionation factors among aqueous ferrous iron (Fe2+(aq)), aqueous FeS clusters (FeSaq), and nanoparticulate mackinawite under neutral and mildly acidic and alkaline pH conditions have been determined using the three-isotope method. Combined voltammetric analysis and geochemical modeling were used to determine the Fe speciation in the experimental systems. The equilibrium 56Fe/54Fe fractionation factor at 20 °C and pH 7 has been determined to be -0.32 ± 0.29 (2σ)‰ between Fe2+(aq) (minor FeSaq also present in the experiment) and mackinawite. This fractionation factor was essentially constant when pH was changed to 6 or 8. When equal molarities of HS⁻ and Fe2+(aq) were added to the system, however, the isotopic fractionation at pH 7 changed to -0.64 ± 0.36 (2σ)‰, correlating with a significant increase in the proportion of FeHS+ and FeSaq. These results highlight a more important role of aqueous Fe-S speciation in the equilibrium Fe isotope fractionation factor than recognized in previous studies. The isotopic fractionation remained constant when temperature was increased from 20 °C to 35 °C for fractionation factors between Fe2+(aq) and mackinawite and between dominantly FeHS+ and mackinawite. Synthesis experiments similar to those of Butler et al. (2005) and Guilbaud et al. (2010) at pH 4 show consistent results: over time, the aqueous Fe-mackinawite fractionation decreases, but even after 38 days of aging the fractionation factor is far from the equilibrium value inferred using the three-isotope method. In contrast, at near-neutral pH the fractionation factor for the synthesis experiment reached the equilibrium value in 38 days. These differences are best explained by noting that at low pH the FeS mackinawite particles coarsen more rapidly via particle aggregation, which limits isotopic exchange, whereas at higher pH mackinawite aggregation is limited, and Fe isotope exchange occurs more rapidly, converging on the equilibrium value. These results suggest that mackinawite formed in natural environments at near-neutral or alkaline pH is unlikely to retain kinetic isotope fractionations, but is more likely to reflect equilibrium isotope compositions. This in turn has important implications for interpreting iron isotope compositions of Fe sulfides in natural systems.

  4. Autobalanced Ramsey Spectroscopy

    NASA Astrophysics Data System (ADS)

    Sanner, Christian; Huntemann, Nils; Lange, Richard; Tamm, Christian; Peik, Ekkehard

    2018-01-01

    We devise a perturbation-immune version of Ramsey's method of separated oscillatory fields. Spectroscopy of an atomic clock transition without compromising the clock's accuracy is accomplished by actively balancing the spectroscopic responses from phase-congruent Ramsey probe cycles of unequal durations. Our simple and universal approach eliminates a wide variety of interrogation-induced line shifts often encountered in high precision spectroscopy, among them, in particular, light shifts, phase chirps, and transient Zeeman shifts. We experimentally demonstrate autobalanced Ramsey spectroscopy on the light-shift-prone 171Yb+ electric octupole optical clock transition and show that interrogation defects are not turned into clock errors. This opens up frequency accuracy perspectives below the 10⁻¹⁸ level for the Yb+ system and for other types of optical clocks.

  5. Electric and Magnetic Dipole Strength at Low Energy

    NASA Astrophysics Data System (ADS)

    Sieja, K.

    2017-08-01

    A low-energy enhancement of radiative strength functions was deduced from recent experiments in several mass regions of nuclei, which is believed to impact considerably the calculated neutron capture rates. In this Letter we investigate the behavior of the low-energy γ-ray strength of the 44Sc isotope, for the first time taking into account both electric and magnetic dipole contributions obtained coherently in the same theoretical approach. The calculations are performed using the large-scale shell-model framework in a full 1ℏω sd-pf-gds model space. Our results corroborate previous theoretical findings for the low-energy enhancement of the M1 strength but show quite different behavior for the E1 strength.

  6. Unified Description of Dynamics of a Repulsive Two-Component Fermi Gas

    NASA Astrophysics Data System (ADS)

    Grochowski, Piotr T.; Karpiuk, Tomasz; Brewczyk, Mirosław; Rzążewski, Kazimierz

    2017-11-01

    We study a binary spin mixture of zero-temperature, repulsively interacting 6Li atoms using both the atomic-orbital and density-functional approaches. The gas is initially prepared in a configuration of two magnetic domains, and we determine the frequency of the spin-dipole oscillations that emerge after the repulsive barrier initially separating the domains is removed. We find, in agreement with a recent experiment [G. Valtolina et al., Nat. Phys. 13, 704 (2017), 10.1038/nphys4108], the occurrence of a ferromagnetic instability in an atomic gas as the interaction strength between different spin states is increased, after which the system becomes ferromagnetic. The ferromagnetic instability is preceded by the softening of the spin-dipole mode.

  7. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  8. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  9. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.
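
    The operational MCD is, in essence, a Viterbi decoder: a maximum likelihood sequence estimator that picks the code path closest to the received stream. The toy sketch below uses a rate-1/2, constraint-length-3 code with hard decisions (generators 7 and 5 octal, chosen here for brevity; the DSN decoder uses a much longer constraint length and soft decisions), just to show the ML decoding principle.

    ```python
    import numpy as np

    # rate-1/2, constraint-length-3 convolutional code, generators 7 and 5 (octal)
    def conv_encode(bits):
        s1 = s2 = 0
        out = []
        for b in bits:
            out += [b ^ s1 ^ s2, b ^ s2]
            s1, s2 = b, s1
        return out

    def viterbi_decode(received, n_bits):
        """Hard-decision Viterbi decoding = maximum likelihood sequence decoding for a
        binary symmetric channel (the minimum Hamming-distance path through the trellis)."""
        n_states = 4                                   # state = (previous bit, bit before that)
        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)        # start in the all-zero state
        paths = [[] for _ in range(n_states)]
        for t in range(n_bits):
            r = received[2 * t: 2 * t + 2]
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if metric[s] == INF:
                    continue
                s1, s2 = s >> 1, s & 1
                for b in (0, 1):
                    expected = [b ^ s1 ^ s2, b ^ s2]
                    bm = (expected[0] != r[0]) + (expected[1] != r[1])   # branch Hamming metric
                    ns = (b << 1) | s1
                    if metric[s] + bm < new_metric[ns]:
                        new_metric[ns] = metric[s] + bm
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[int(np.argmin(metric))]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = conv_encode(msg)
    coded[3] ^= 1                                      # flip one channel bit
    print(viterbi_decode(coded, len(msg)) == msg)      # True: the single error is corrected
    ```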

  10. The Construct Validity of Higher Order Structure-of-Intellect Abilities in a Battery of Tests Emphasizing the Product of Transformations: A Confirmatory Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1982-01-01

    A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)

  11. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation, and an extension to double decrement is introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
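
    For the single-decrement case the two estimators mentioned are simple to write down: under the constant-force assumption the MLE of the hazard is deaths divided by central exposure, giving q̂ = 1 - exp(-μ̂), while a moment/actuarial-style estimate divides deaths by initial exposure. The sketch below illustrates both; the numbers are made up for the example.

    ```python
    import numpy as np

    def constant_force_qx(deaths, central_exposure):
        """MLE of the one-year death probability q_x under constant force of mortality:
        mu_hat = D / E_c (deaths per unit of person-time), q_hat = 1 - exp(-mu_hat)."""
        mu_hat = deaths / central_exposure
        return 1.0 - np.exp(-mu_hat)

    def actuarial_qx(deaths, initial_exposure):
        """Moment-style (actuarial) estimate: deaths divided by initial exposure."""
        return deaths / initial_exposure

    # illustrative numbers only
    print(constant_force_qx(deaths=18, central_exposure=950.0))   # ~0.0188
    print(actuarial_qx(deaths=18, initial_exposure=959.0))        # ~0.0188
    ```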

  12. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  13. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  14. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.

  15. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.

  16. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
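
    One plausible reading of the weighted statistic is a lod score in which the likelihoods maximised under each candidate marker order are combined with weights equal to the posterior probabilities of those orders. The sketch below encodes that reading only as an illustration; the exact forms of the three proposed statistics are given in the paper, and the function name and inputs here are assumptions.

    ```python
    import numpy as np

    def weighted_lod(max_likelihoods, null_likelihoods, posterior_order_probs):
        """Illustrative weighted lod score: per-order maximised likelihoods are combined
        with weights equal to the posterior probability of each marker order."""
        w = np.asarray(posterior_order_probs, dtype=float)
        w = w / w.sum()                                   # normalise the order weights
        num = np.sum(w * np.asarray(max_likelihoods))     # weighted alternative likelihood
        den = np.sum(w * np.asarray(null_likelihoods))    # weighted null likelihood
        return np.log10(num / den)

    # three candidate marker orders with posterior probabilities 0.6, 0.3, 0.1 (made up)
    print(weighted_lod([1e5, 4e4, 2e3], [1e2, 1e2, 1e2], [0.6, 0.3, 0.1]))
    ```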

  17. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  18. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  19. Customized Versus Noncustomized Sound Therapy for Treatment of Tinnitus: A Randomized Crossover Clinical Trial.

    PubMed

    Mahboubi, Hossein; Haidar, Yarah M; Kiumehr, Saman; Ziai, Kasra; Djalilian, Hamid R

    2017-10-01

    To determine the effectiveness of a customized sound therapy and compare its effectiveness to that of masking with broadband noise. Subjects were randomized to receive either customized sound therapy or broadband noise for 2 hours per day for 3 months and then switched to the other treatment after a washout period. The outcome variables were tinnitus loudness (scored 0-10), Tinnitus Handicap Inventory (THI), Beck Anxiety Inventory (BAI), minimum masking levels (MML), and residual inhibition (RI). Eighteen subjects completed the study. Mean age was 53 ± 11 years, and mean tinnitus duration was 118 ± 99 months. With customized sound therapy, mean loudness decreased from 6.4 ± 2.0 to 4.9 ± 1.9 ( P = .001), mean THI decreased from 42.8 ± 21.6 to 31.5 ± 20.3 ( P < .001), mean BAI decreased from 10.6 ± 10.9 to 8.3 ± 9.9 ( P = .01), and MML decreased from 22.3 ± 11.6 dB SL to 17.2 ± 10.6 dB SL ( P = .005). After 3 months of broadband noise therapy, only BAI and, to a lesser degree, MML decreased ( P = .003 and .04, respectively). Customized sound therapy can decrease the loudness and THI scores of tinnitus patients, and the results may be superior to broadband noise.

  20. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  1. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
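
    The point-by-point Gaussian maximum-likelihood classifier referred to here is essentially quadratic discriminant analysis: each class is modelled by a multivariate normal estimated from training pixels, and each pixel is assigned to the class with the highest log-likelihood. The sketch below is a generic implementation of that rule, not the Image-100 system's code; the toy two-band data are invented for the example.

    ```python
    import numpy as np

    class GaussianMLClassifier:
        """Gaussian maximum likelihood (quadratic discriminant) classifier."""
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.means_, self.inv_, self.logdet_ = {}, {}, {}
            for c in self.classes_:
                Xc = X[y == c]
                self.means_[c] = Xc.mean(axis=0)
                cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularised
                self.inv_[c] = np.linalg.inv(cov)
                self.logdet_[c] = np.linalg.slogdet(cov)[1]
            return self

        def predict(self, X):
            scores = []
            for c in self.classes_:
                d = X - self.means_[c]
                maha = np.einsum('ij,jk,ik->i', d, self.inv_[c], d)   # Mahalanobis distances
                scores.append(-0.5 * (maha + self.logdet_[c]))        # log-likelihood (+ const)
            return self.classes_[np.argmax(np.vstack(scores), axis=0)]

    # toy two-band, two-class training set
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal([2, 2], 0.5, (100, 2)), rng.normal([5, 3], 0.8, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    clf = GaussianMLClassifier().fit(X, y)
    print((clf.predict(X) == y).mean())
    ```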

  2. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
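
    The NDMMF-specific score equations are in the report itself; as a generic illustration of solving likelihood equations by repeated Newton-Raphson iterations, the sketch below fits a logistic regression by updating beta <- beta + (X'WX)^(-1) X'(y - p) until the step is negligible. The model and data are assumptions chosen only to show the iteration, not the NDMMF.

    ```python
    import numpy as np

    def logistic_mle_newton(X, y, n_iter=25, tol=1e-10):
        """Maximum likelihood estimation of logistic regression coefficients by Newton-Raphson."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
            W = p * (1.0 - p)                        # diagonal of the weight matrix
            score = X.T @ (y - p)                    # gradient of the log-likelihood
            hessian = X.T @ (X * W[:, None])         # observed information X'WX
            step = np.linalg.solve(hessian, score)
            beta = beta + step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    rng = np.random.default_rng(5)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    true_beta = np.array([-0.5, 1.2])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
    print(logistic_mle_newton(X, y))                 # close to (-0.5, 1.2)
    ```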

  3. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  4. Precision Mass Measurements of 58-63Cr: Nuclear Collectivity Towards the N = 40 Island of Inversion

    NASA Astrophysics Data System (ADS)

    Mougeot, M.; Atanasov, D.; Blaum, K.; Chrysalidis, K.; Goodacre, T. Day; Fedorov, D.; Fedosseev, V.; George, S.; Herfurth, F.; Holt, J. D.; Lunney, D.; Manea, V.; Marsh, B.; Neidherr, D.; Rosenbusch, M.; Rothe, S.; Schweikhard, L.; Schwenk, A.; Seiffert, C.; Simonis, J.; Stroberg, S. R.; Welker, A.; Wienholtz, F.; Wolf, R. N.; Zuber, K.

    2018-06-01

    The neutron-rich isotopes 58-63Cr were produced for the first time at the ISOLDE facility and their masses were measured with the ISOLTRAP spectrometer. The new values are up to 300 times more precise than those in the literature and indicate significantly different nuclear structure from the new mass-surface trend. A gradual onset of deformation is found in this proton and neutron midshell region, which is a gateway to the second island of inversion around N = 40. In addition to comparisons with density-functional theory and large-scale shell-model calculations, we present predictions from the valence-space formulation of the ab initio in-medium similarity renormalization group, the first such results for open-shell chromium isotopes.

  5. High-Fidelity Preservation of Quantum Information During Trapped-Ion Transport

    NASA Astrophysics Data System (ADS)

    Kaufmann, Peter; Gloger, Timm F.; Kaufmann, Delia; Johanning, Michael; Wunderlich, Christof

    2018-01-01

    A promising scheme for building scalable quantum simulators and computers is the synthesis of a scalable system using interconnected subsystems. A prerequisite for this approach is the ability to faithfully transfer quantum information between subsystems. With trapped atomic ions, this can be realized by transporting ions with quantum information encoded into their internal states. Here, we measure with high precision the fidelity of quantum information encoded into hyperfine states of a 171Yb+ ion during ion transport in a microstructured Paul trap. Ramsey spectroscopy of the ion's internal state is interleaved with up to 4000 transport operations over a distance of 280 μm, each taking 12.8 μs. We obtain a state fidelity of 99.9994(+6/−7)% per ion transport.

  6. Frequency Measurements of Superradiance from the Strontium Clock Transition

    NASA Astrophysics Data System (ADS)

    Norcia, Matthew A.; Cline, Julia R. K.; Muniz, Juan A.; Robinson, John M.; Hutson, Ross B.; Goban, Akihisa; Marti, G. Edward; Ye, Jun; Thompson, James K.

    2018-04-01

    We present the first characterization of the spectral properties of superradiant light emitted from the ultranarrow, 1-mHz-linewidth optical clock transition in an ensemble of cold 87Sr atoms. Such a light source has been proposed as a next-generation active atomic frequency reference, with the potential to enable high-precision optical frequency references to be used outside laboratory environments. By comparing the frequency of our superradiant source to that of a state-of-the-art cavity-stabilized laser and optical lattice clock, we observe a fractional Allan deviation of 6.7(1)×10⁻¹⁶ at 1 s of averaging, establish absolute accuracy at the 2-Hz (4×10⁻¹⁵ fractional frequency) level, and demonstrate insensitivity to key environmental perturbations.

  7. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  8. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    DTIC Science & Technology

    1982-04-01

    The report examines the bias in maximum likelihood estimates of item parameters.

  9. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  10. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  11. 7Li-induced reaction on natMo: A study of complete versus incomplete fusion

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Maiti, Moumita; Lahiri, Susanta

    2017-07-01

    Background: Several investigations of the complete-incomplete fusion (CF-ICF) dynamics of well-bound α-cluster nuclei have been carried out above the Coulomb barrier (~4-7 MeV/nucleon) in recent years. It is therefore expected that significant ICF relative to CF will be observed in reactions induced by a weakly bound α-cluster nucleus at energies slightly above the barrier. Purpose: Study of the CF-ICF dynamics by measuring the populated residues in the weakly bound 7Li+natMo system at energies from slightly above the Coulomb barrier to well above it. Method: In order to investigate CF-ICF in the loosely bound system, a 7Li beam was bombarded on natMo foils, separated alternately by aluminium (Al) catcher foils, within ~3-6.5 MeV/nucleon. Evaporation residues produced in each foil were identified by off-line γ-ray spectrometry. Measured cross section data of the residues were compared with theoretical model calculations based on equilibrium (EQ) and pre-equilibrium (PEQ) reaction mechanisms. Results: The experimental cross sections of the 101m,100,99m,97Rh, 95,97Ru, 99m,96,95,94,93m+gTc, and 93mMo residues measured at various projectile energies were satisfactorily reproduced by the simplified coupled channel approach in comparison to the single barrier penetration model calculation. Significant cross section enhancement in the α-emitting channels was observed compared to the EQ and PEQ model calculations throughout the observed energy region. The ICF process relative to CF was analyzed by comparison with EMPIRE, and the incomplete fusion fraction was found to increase with increasing projectile energy. Conclusions: Theoretical model calculations reveal that the compound reaction mechanism is the major contributor to the production of residues in the 7Li+natMo reaction. Theoretical evaluations substantiate the contribution of ICF over CF in the α-emitting channels. The EMPIRE estimations shed light on its predictive capability for cross sections of residues from heavy-ion induced reactions.

  12. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  13. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. "Conventional" and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including resorting either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  14. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often suffer serious bias, or even fail to exist, because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models that was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach alone addresses both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286

  15. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often suffer serious bias, or even fail to exist, because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models that was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach alone addresses both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
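
    A minimal sketch of what a double-penalized logistic objective of this kind could look like, assuming the two penalties are Firth's 0.5 log|X'WX| term and a ridge term lambda*||beta||^2; the function name, the choice to leave the intercept unpenalized, and the fixed ridge parameter are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.optimize import minimize

      def double_penalized_negloglik(beta, X, y, lam):
          eta = X @ beta
          p = 1.0 / (1.0 + np.exp(-eta))
          loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli log-likelihood
          W = p * (1.0 - p)
          fisher = X.T @ (W[:, None] * X)                     # information matrix X'WX
          firth = 0.5 * np.linalg.slogdet(fisher)[1]          # Firth-type penalty
          ridge = lam * np.sum(beta[1:] ** 2)                 # ridge penalty, intercept excluded
          return -(loglik + firth - ridge)

      # beta_hat = minimize(double_penalized_negloglik, np.zeros(X.shape[1]),
      #                     args=(X, y, 0.1), method="BFGS").x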

  16. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  17. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
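
    As an illustration of the flavor of algorithm described (fitting from marginal sums rather than the full contingency table), here is a generic iterative proportional fitting sketch in Python for the simplest two-way independence model; the modified IPF and Newton-Raphson algorithms for loglinear IRT models in the report are considerably more general.

      import numpy as np

      def ipf_independence(row_totals, col_totals, n_iter=200, tol=1e-10):
          # ML fitted counts of the two-way independence loglinear model,
          # computed only from the marginal sums.
          row_totals = np.asarray(row_totals, dtype=float)
          col_totals = np.asarray(col_totals, dtype=float)
          m = np.ones((len(row_totals), len(col_totals)))
          for _ in range(n_iter):
              m *= (row_totals / m.sum(axis=1))[:, None]   # match row margins
              m *= (col_totals / m.sum(axis=0))[None, :]   # match column margins
              if (np.abs(m.sum(axis=1) - row_totals).max() < tol and
                      np.abs(m.sum(axis=0) - col_totals).max() < tol):
                  break
          return m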

  18. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  19. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and the proportions. Simple models are developed for imperfections in the labels and for classification errors, and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  20. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  1. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
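
    A compact sketch of the underlying idea: combining several correlated estimates of the same eigenvalue by a generalized least-squares weighted mean, which is the ML solution under multivariate normality. The covariance matrix would in practice be estimated from the correlated Monte Carlo runs; nothing here is specific to SAM-CE or VIM.

      import numpy as np

      def ml_combine(estimates, cov):
          # Minimum-variance combination x_hat = (1' C^-1 x) / (1' C^-1 1),
          # with variance 1 / (1' C^-1 1).
          estimates = np.asarray(estimates, dtype=float)
          ones = np.ones(len(estimates))
          ci1 = np.linalg.solve(cov, ones)
          combined = (ci1 @ estimates) / (ci1 @ ones)
          variance = 1.0 / (ci1 @ ones)
          return combined, variance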

  2. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  3. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  4. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  5. Orbital Motion of Young Binaries in Ophiuchus and Upper Centaurus–Lupus

    NASA Astrophysics Data System (ADS)

    Schaefer, G. H.; Prato, L.; Simon, M.

    2018-03-01

    We present measurements of the orbital positions and flux ratios of 17 binary and triple systems in the Ophiuchus star-forming region and the Upper Centaurus–Lupus cluster based on adaptive optics imaging at the Keck Observatory. We report the detection of visual companions in MML 50 and MML 53 for the first time, as well as the possible detection of a third component in WSB 21. For six systems in our sample, our measurements provide a second orbital position following their initial discoveries over a decade ago. For eight systems with sufficient orbital coverage, we analyze the range of orbital solutions that fit the data. Ultimately, these observations will help provide the groundwork toward measuring precise masses for these pre-main-sequence stars and understanding the distribution of orbital parameters in young multiple systems.

  6. Shape Evolution in Neutron-Rich Krypton Isotopes Beyond N = 60: First Spectroscopy of 98,100Kr

    NASA Astrophysics Data System (ADS)

    Flavigny, F.; Doornenbal, P.; Obertelli, A.; Delaroche, J.-P.; Girod, M.; Libert, J.; Rodriguez, T. R.; Authelet, G.; Baba, H.; Calvet, D.; Château, F.; Chen, S.; Corsi, A.; Delbart, A.; Gheller, J.-M.; Giganon, A.; Gillibert, A.; Lapoux, V.; Motobayashi, T.; Niikura, M.; Paul, N.; Roussé, J.-Y.; Sakurai, H.; Santamaria, C.; Steppenbeck, D.; Taniuchi, R.; Uesaka, T.; Ando, T.; Arici, T.; Blazhev, A.; Browne, F.; Bruce, A.; Carroll, R.; Chung, L. X.; Cortés, M. L.; Dewald, M.; Ding, B.; Franchoo, S.; Górska, M.; Gottardo, A.; Jungclaus, A.; Lee, J.; Lettmann, M.; Linh, B. D.; Liu, J.; Liu, Z.; Lizarazo, C.; Momiyama, S.; Moschner, K.; Nagamine, S.; Nakatsuka, N.; Nita, C.; Nobs, C. R.; Olivier, L.; Orlandi, R.; Patel, Z.; Podolyák, Zs.; Rudigier, M.; Saito, T.; Shand, C.; Söderström, P. A.; Stefan, I.; Vaquero, V.; Werner, V.; Wimmer, K.; Xu, Z.

    2017-06-01

    We report on the first γ-ray spectroscopy of low-lying states in neutron-rich 98,100Kr isotopes obtained from 99,101Rb(p,2p) reactions at ~220 MeV/nucleon. A reduction of the 2_1^+ state energies beyond N = 60 demonstrates a significant increase of deformation, shifted in neutron number compared to the sharper transition observed in strontium and zirconium isotopes. State-of-the-art beyond-mean-field calculations using the Gogny D1S interaction predict level energies in good agreement with experimental results. The identification of a low-lying (0_2^+, 2_2^+) state in 98Kr provides the first experimental evidence of a competing configuration at low energy in neutron-rich krypton isotopes, consistent with the oblate-prolate shape coexistence picture suggested by theory.

  7. Effect of Ferric Ions on Bioleaching of Pentlandite Concentrate

    NASA Astrophysics Data System (ADS)

    Li, Qian; Lai, Huimin; Yang, Yongbin; Xu, Bin; Jiang, Tao; Zhang, Yaping

    The intensified effects of ferric phosphate and ferric sulfate, used as nutrient and oxidant, on the bioleaching of pentlandite concentrate with Acidithiobacillus ferrooxidans and Sulfobacillus thermosulfidooxidans were studied. The results showed that the nickel leaching rate increased continuously as FePO4 or Fe2(SO4)3 was added up to a certain level, but declined when the additive was in excess. For A. ferrooxidans, the optimum additive amount of Fe2(SO4)3 was 6.63 mmol/L, at which the nickel leaching rate reached 71.76%. Compared with Fe2(SO4)3, the optimum additive amount of FePO4 was 26.52 mmol/L for both strains. With FePO4, the nickel leaching rates for A. ferrooxidans and S. thermosulfidooxidans increased to 98.06% and 98.11%, which were 1.83 and 1.55 times the leaching rates of the blank tests, respectively.

  8. Lepton Flavor Violation Induced by a Neutral Scalar at Future Lepton Colliders

    NASA Astrophysics Data System (ADS)

    Dev, P. S. Bhupal; Mohapatra, Rabindra N.; Zhang, Yongchao

    2018-06-01

    Many new physics scenarios beyond the standard model often necessitate the existence of a (light) neutral scalar H, which might couple to the charged leptons in a flavor-violating way while evading all existing constraints. We show that such scalars could be effectively produced at future lepton colliders, either on shell or off shell depending on their mass, and induce lepton flavor violating (LFV) signals, i.e., e+e- → ℓα±ℓβ∓ (+H) with α ≠ β. We find that a large parameter space of the scalar mass and the LFV couplings can be probed, well beyond the current low-energy constraints in the lepton sector. In particular, a scalar-loop-induced explanation of the long-standing muon g-2 anomaly can be directly tested in the on-shell mode.

  9. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits, a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
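
    A short Python sketch of the maximum-likelihood side of such an analysis, assuming a lognormal model for the -Dst storm maxima with the location parameter fixed at zero; the bootstrap confidence limits and the weighted least-squares comparison described above are omitted.

      import numpy as np
      from scipy import stats

      def exceedance_probability(dst_maxima, threshold=850.0):
          # ML lognormal fit to observed -Dst storm-time maxima (nT) and the
          # fitted probability that a given storm exceeds the threshold;
          # multiplying by the storm occurrence rate gives events per century.
          data = np.asarray(dst_maxima, dtype=float)
          shape, loc, scale = stats.lognorm.fit(data, floc=0)
          return stats.lognorm.sf(threshold, shape, loc=loc, scale=scale)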

  10. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  11. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  12. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a five-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started at several initial guesses of the parameter set.
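
    A toy EM sketch for a two-component Weibull mixture on uncensored failure times, to make the E-step/M-step structure concrete; the censored-data E-step and the postmortem/nonpostmortem variants described above are omitted, and the initialization is arbitrary.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      def em_weibull_mixture(t, n_iter=100):
          t = np.asarray(t, dtype=float)
          w = np.array([0.5, 0.5])                      # mixing proportions
          shapes = np.array([1.0, 2.0])                 # arbitrary starting values
          scales = np.array([np.median(t), np.mean(t)])
          for _ in range(n_iter):
              # E-step: posterior probability that each observation belongs
              # to each subpopulation
              dens = np.stack([weibull_min.pdf(t, shapes[k], scale=scales[k])
                               for k in range(2)], axis=1) * w
              resp = dens / np.maximum(dens.sum(axis=1, keepdims=True), 1e-300)
              # M-step: weighted ML update of the weights and each component
              w = resp.mean(axis=0)
              for k in range(2):
                  def nll(params, k=k):
                      c, s = np.exp(params)             # keep shape/scale positive
                      return -np.sum(resp[:, k] * weibull_min.logpdf(t, c, scale=s))
                  res = minimize(nll, np.log([shapes[k], scales[k]]),
                                 method="Nelder-Mead")
                  shapes[k], scales[k] = np.exp(res.x)
          return w, shapes, scales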

  13. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  14. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  15. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. Bivariate distributions with the same margins may exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first is full maximum likelihood. The second is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third is two-stage partially parametric maximum likelihood, which is similar to the second procedure but estimates the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
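
    A small sketch of the two-stage maximum likelihood idea for a copula model, assuming exponential margins and a Clayton copula and ignoring censoring; the dissertation's estimators handle censored failure times and other dependency structures.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def two_stage_clayton(x, y):
          x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
          # Stage 1: ML fit of each exponential margin
          rate_x, rate_y = 1.0 / x.mean(), 1.0 / y.mean()
          u = 1.0 - np.exp(-rate_x * x)                 # probability-integral transforms
          v = 1.0 - np.exp(-rate_y * y)

          # Stage 2: maximize the Clayton copula log-density with margins fixed
          def neg_loglik(theta):
              logc = (np.log(1.0 + theta)
                      - (1.0 + theta) * (np.log(u) + np.log(v))
                      - (2.0 + 1.0 / theta) * np.log(u**(-theta) + v**(-theta) - 1.0))
              return -np.sum(logc)

          theta_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0),
                                      method="bounded").x
          return rate_x, rate_y, theta_hat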

  16. Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy

    DTIC Science & Technology

    Knapp, Mary; Robey, Frank; Volz, Ryan; Lind, Frank; Fenn, Alan; Morris, Alex; Silver, Mark; Klein, Sarah

    2016-03-05

    Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets ... observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to

  17. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  18. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  19. 77 FR 20452 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-04

    ... of Acceptance, Waiver and Consent No. EAF0401000001 (MML Distributors, LLC) (Oct. 2005); NASD Letter of Acceptance, Waiver and Consent No. EAF0401240001 (AFSG Securities Corp.) (Oct. 2005); FINRA Letter...

  20. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  1. Exact short-time height distribution for the flat Kardar-Parisi-Zhang interface

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch

    2018-05-01

    We determine the exact short-time distribution -ln P_f(H, t) = S_f(H)/√t of the one-point height H = h(x = 0, t) of an evolving 1+1 dimensional Kardar-Parisi-Zhang (KPZ) interface for a flat initial condition. This is achieved by combining (i) the optimal fluctuation method, (ii) a time-reversal symmetry of the KPZ equation in 1+1 dimension, and (iii) the recently determined exact short-time height distribution -ln P_st(H, t) = S_st(H)/√t of the stationary KPZ interface. In studying the large-deviation function S_st(H) of the latter, one encounters two branches: an analytic one and a nonanalytic one. The analytic branch is nonphysical beyond a critical value of H where a second-order dynamical phase transition occurs. Here we show that, remarkably, it is the analytic branch of S_st(H) which determines the large-deviation function S_f(H) of the flat interface via a simple mapping S_f(H) = 2^(-3/2) S_st(2H).

  CLAIM (CLinical Accounting InforMation)--an XML-based data exchange standard for connecting electronic medical record systems to patient accounting systems.

    PubMed

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2005-08-01

    With the evolving and diverse electronic medical record (EMR) systems, there appears to be an ever greater need to link EMR systems and patient accounting systems with a standardized data exchange format. To this end, the CLinical Accounting InforMation (CLAIM) data exchange standard was developed. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data among different medical institutions. CLAIM uses eXtensible Markup Language (XML) as a meta-language. The current version, 2.1, inherited the basic structure of MML 2.x and contains two modules including information related to registration, appointment, procedure and charging. CLAIM 2.1 was implemented successfully in Japan in 2001. Consequently, it was confirmed that CLAIM could be used as an effective data exchange format between EMR systems and patient accounting systems.

  2. Quasielastic charged-current neutrino scattering in the scaling model with relativistic effective mass

    NASA Astrophysics Data System (ADS)

    Ruiz Simo, I.; Martinez-Consentino, V. L.; Amaro, J. E.; Ruiz Arriola, E.

    2018-06-01

    We use a recent scaling analysis of the quasielastic electron scattering data from 12C to predict the quasielastic charge-changing neutrino scattering cross sections within an uncertainty band. We use a scaling function extracted from a selection of the (e,e') cross section data, and an effective nucleon mass inspired by the relativistic mean-field model of nuclear matter. The corresponding superscaling analysis with relativistic effective mass (SuSAM*) describes a large amount of the electron data lying inside a phenomenological quasielastic band. The effective mass incorporates the enhancement of the transverse current produced by the relativistic mean field. The scaling function incorporates nuclear effects beyond the impulse approximation, in particular meson-exchange currents and short-range correlations producing tails in the scaling function. Besides its simplicity, this model describes the neutrino data as reasonably well as other more sophisticated nuclear models.

  3. Validation of a Korean Version of the Tinnitus Handicap Questionnaire

    PubMed Central

    Yoo, Ik Won; Hwang, Sun Jin; Hwang, Soon Young

    2015-01-01

    Objectives: The goal of the present study was to evaluate the reliability and validity of the Korean version of the tinnitus handicap questionnaire (THQ-K). Methods: A total of 60 patients were included in this study. Patients responded to the THQ-K, the tinnitus handicap inventory (THI), Beck's depression index (BDI), and the visual analogue scale (VAS) for loudness and distress; pitch match, loudness match, and minimum masking level (MML) tests were performed. Results: Internal consistency of the THQ-K was examined using the Cronbach coefficient alpha; Cronbach alpha was 0.96. The THQ-K showed a significant correlation with the THI, BDI, VAS for distress, and VAS for loudness, but no significant correlation with psychoacoustic measurements of tinnitus, such as loudness match, pitch match, and MML. Conclusion: The THQ-K is a reliable and valid test for evaluating the degree of handicap due to tinnitus for both research and clinical use. PMID:26330911

  4. Entanglement-Enhanced Phase Estimation without Prior Phase Information

    NASA Astrophysics Data System (ADS)

    Colangelo, G.; Martin Ciurana, F.; Puentes, G.; Mitchell, M. W.; Sewell, R. J.

    2017-06-01

    We study the generation of planar quantum squeezed (PQS) states by quantum nondemolition (QND) measurement of an ensemble of 87Rb atoms with a Poisson-distributed atom number. Precise calibration of the QND measurement allows us to infer the conditional covariance matrix describing the Fy and Fz components of the PQS states, revealing the dual squeezing characteristic of PQS states. PQS states have been proposed for single-shot phase estimation without prior knowledge of the likely values of the phase. We show that for an arbitrary phase, the generated PQS states can give a metrological advantage of at least 3.1 dB relative to classical states. The PQS state also beats, for most phase angles, single-component-squeezed states generated by QND measurement with the same resources and atom number statistics. Using spin squeezing inequalities, we show that spin-spin entanglement is responsible for the metrological advantage.

  5. Spin-Imbalanced Quasi-Two-Dimensional Fermi Gases

    NASA Astrophysics Data System (ADS)

    Ong, W.; Cheng, Chingyun; Arakelyan, I.; Thomas, J. E.

    2015-03-01

    We measure the density profiles for a Fermi gas of 6Li containing N1 spin-up atoms and N2 spin-down atoms, confined in a quasi-two-dimensional geometry. The spatial profiles are measured as a function of spin imbalance N2/N1 and interaction strength, which is controlled by means of a collisional (Feshbach) resonance. The measured cloud radii and central densities are in disagreement with mean-field Bardeen-Cooper-Schrieffer theory for a true two-dimensional system. We find that the data for normal-fluid mixtures are reasonably well fit by a simple two-dimensional polaron model of the free energy. Not predicted by the model is a phase transition to a spin-balanced central core, which is observed above a critical value of N2/N1. Our observations provide important benchmarks for predictions of the phase structure of quasi-two-dimensional Fermi gases.

  6. Three-component fermions with surface Fermi arcs in tungsten carbide

    NASA Astrophysics Data System (ADS)

    Ma, J.-Z.; He, J.-B.; Xu, Y.-F.; Lv, B. Q.; Chen, D.; Zhu, W.-L.; Zhang, S.; Kong, L.-Y.; Gao, X.; Rong, L.-Y.; Huang, Y.-B.; Richard, P.; Xi, C.-Y.; Choi, E. S.; Shao, Y.; Wang, Y.-L.; Gao, H.-J.; Dai, X.; Fang, C.; Weng, H.-M.; Chen, G.-F.; Qian, T.; Ding, H.

    2018-04-01

    Topological Dirac and Weyl semimetals not only host quasiparticles analogous to the elementary fermionic particles in high-energy physics, but also have a non-trivial band topology manifested by gapless surface states, which induce exotic surface Fermi arcs. Recent advances suggest new types of topological semimetal, in which spatial symmetries protect gapless electronic excitations without high-energy analogues. Here, using angle-resolved photoemission spectroscopy, we observe triply degenerate nodal points near the Fermi level of tungsten carbide with space group P-6m2 (No. 187), in which the low-energy quasiparticles are described as three-component fermions distinct from Dirac and Weyl fermions. We further observe topological surface states, whose constant-energy contours constitute pairs of `Fermi arcs' connecting to the surface projections of the triply degenerate nodal points, proving the non-trivial topology of the newly identified semimetal state.

  7. Surface-Induced Near-Field Scaling in the Knudsen Layer of a Rarefied Gas

    NASA Astrophysics Data System (ADS)

    Gazizulin, R. R.; Maillet, O.; Zhou, X.; Cid, A. Maldonado; Bourgeois, O.; Collin, E.

    2018-01-01

    We report on experiments performed within the Knudsen boundary layer of a low-pressure gas. The noninvasive probe we use is a suspended nanoelectromechanical string, which interacts with 4He gas at cryogenic temperatures. When the pressure P is decreased, a reduction of the damping force below the molecular friction (∝ P) was first reported in Phys. Rev. Lett. 113, 136101 (2014), 10.1103/PhysRevLett.113.136101, and has not been reproduced since. We demonstrate that this effect is independent of geometry, but dependent on temperature. Within the framework of kinetic theory, this reduction is interpreted as a rarefaction phenomenon, carried through the boundary layer by a deviation from the usual Maxwell-Boltzmann equilibrium distribution induced by surface scattering. Adsorbed atoms are shown to play a key role in the process, which explains why room-temperature data fail to reproduce it.

  8. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
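
    A schematic version of such an event-by-event likelihood fit in Python: given per-event probability density values for each hypothesis (in PEN these come from Monte Carlo-verified distributions of energies, times, and other observables), the process fractions are found by maximizing the summed log of the mixture density. The softmax parameterization is simply a convenient way to keep the fractions on the simplex and is an assumption of this sketch, not a description of the PEN code.

      import numpy as np
      from scipy.optimize import minimize

      def fit_process_fractions(pdf_values):
          # pdf_values: array of shape (n_events, n_processes) holding each
          # event's probability density under each process hypothesis.
          pdf_values = np.asarray(pdf_values, dtype=float)

          def nll(z):
              f = np.exp(z) / np.sum(np.exp(z))        # fractions on the simplex
              return -np.sum(np.log(pdf_values @ f))

          z_hat = minimize(nll, np.zeros(pdf_values.shape[1]),
                           method="Nelder-Mead").x
          return np.exp(z_hat) / np.sum(np.exp(z_hat))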

  9. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on the maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and the received target image has each of its pixels disturbed by an independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative update of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy where accurate and stabilized optical pointing is essential.
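
    A bare-bones stand-in for the transform-domain correlation step: estimating a translational offset from the peak of an FFT-based cross-correlation. The closed-loop weighting, the known target profile, and the rotation-invariance handling described above are not reproduced in this sketch.

      import numpy as np

      def estimate_shift(reference, received):
          # Peak of the circular cross-correlation computed in the Fourier
          # domain gives the integer-pixel offset of 'received' relative to
          # 'reference'.
          corr = np.fft.ifft2(np.fft.fft2(received) *
                              np.conj(np.fft.fft2(reference))).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return tuple(p if p <= s // 2 else p - s     # wrap to signed shifts
                       for p, s in zip(peak, corr.shape))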

  10. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  11. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
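
    For reference, the maximum-likelihood classifier being compared here is essentially a per-class Gaussian rule of the following form: band means and covariances are estimated from the training sites, and each pixel is assigned to the class with the highest Gaussian log-likelihood. This is a generic sketch, not the authors' code.

      import numpy as np

      def gaussian_ml_classify(train_pixels, train_labels, pixels):
          classes = np.unique(train_labels)
          scores = []
          for c in classes:
              xc = train_pixels[train_labels == c]
              mu = xc.mean(axis=0)
              cov = np.cov(xc, rowvar=False)
              _, logdet = np.linalg.slogdet(cov)
              diff = pixels - mu
              mahal = np.einsum("ij,ij->i", diff @ np.linalg.inv(cov), diff)
              scores.append(-0.5 * (logdet + mahal))   # log-likelihood up to a constant
          return classes[np.argmax(np.stack(scores, axis=1), axis=1)]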

  12. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
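
    The core of the FIML idea can be illustrated with a casewise multivariate-normal log-likelihood in which each record contributes only through its observed entries; in MSEM the mean vector and covariance matrix would themselves be functions of the structural parameters, which this sketch does not show.

      import numpy as np

      def fiml_negloglik(mu, sigma, data):
          # data: 2-D array with np.nan marking missing entries.
          mu = np.asarray(mu, dtype=float)
          sigma = np.asarray(sigma, dtype=float)
          nll = 0.0
          for row in np.asarray(data, dtype=float):
              obs = ~np.isnan(row)
              if not obs.any():
                  continue
              x, m = row[obs], mu[obs]
              s = sigma[np.ix_(obs, obs)]
              _, logdet = np.linalg.slogdet(s)
              resid = x - m
              nll += 0.5 * (obs.sum() * np.log(2.0 * np.pi) + logdet
                            + resid @ np.linalg.solve(s, resid))
          return nll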

  13. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900, through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer-month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
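
    A generic sketch of the model form in Python using statsmodels, assuming a single winter-flow predictor (log-transformed) and a binary indicator of whether the following summer's flow fell below a chosen drought threshold; the report's published equations are basin-specific and use particular month and threshold combinations not reproduced here.

      import numpy as np
      import statsmodels.api as sm

      def fit_drought_model(winter_mean_flow, summer_below_threshold):
          # Logistic regression: P(summer flow below threshold | winter flow)
          X = sm.add_constant(np.log(np.asarray(winter_mean_flow, dtype=float)))
          return sm.Logit(np.asarray(summer_below_threshold), X).fit(disp=False)

      # model = fit_drought_model(winter_flows, drought_indicator)
      # model.predict([[1.0, np.log(50.0)]]) gives the drought probability for a
      # hypothetical winter mean flow of 50, in the units of the input data.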

  14. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
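
    As one concrete example of the maximum-likelihood option in such a toolbox, the classic Richardson-Lucy iteration (ML under Poisson noise) looks like the following; this is a generic Python sketch, not the IDL implementation distributed with DeConv_Tool.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=30):
          # Iteratively multiply the estimate by the back-projected ratio of
          # the data to the current reblurred estimate.
          image = np.asarray(image, dtype=float)
          estimate = np.full_like(image, image.mean())
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate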

  15. F-8C adaptive flight control laws

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.

    1977-01-01

    Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft application. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effects on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.

  16. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  17. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
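
    The role of non-arrival events can be illustrated with the simplest possible case: if each laser shot either registers a photoelectron or not, and the per-shot photoelectron number is Poisson, the ML estimate of the mean signal has a closed form. The per-wavelength-bin treatment and the Thomson spectral fit of the actual method are not shown; this is only a sketch of the counting idea.

      import numpy as np

      def mean_photoelectrons(n_shots, n_arrivals):
          # With Poisson counts, P(at least one photoelectron) = 1 - exp(-mu),
          # so inverting the binomial MLE of that probability gives mu.
          # Assumes 0 <= n_arrivals < n_shots.
          p_hat = n_arrivals / n_shots
          return -np.log(1.0 - p_hat)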

  18. Design and development of an international clinical data exchange system: the international layer function of the Dolphin Project

    PubMed Central

    Zhou, Tian-shu; Chu, Jian; Araki, Kenji; Yoshihara, Hiroyuki

    2011-01-01

    Objective: At present, most clinical data are exchanged between organizations within a regional system. However, people traveling abroad may need to visit a hospital, which would make international exchange of clinical data very useful. Background: Since 2007, a collaborative effort to achieve clinical data sharing has been carried out at Zhejiang University in China and Kyoto University and Miyazaki University in Japan; each is running a regional clinical information center. Methods: An international layer system named Global Dolphin was constructed with several key services, sharing patients' health information between countries using a medical markup language (MML). The system was piloted with 39 test patients. Results: The three regions above have records for 966 000 unique patients, which are available through Global Dolphin. Data exchanged successfully from Japan to China for the 39 study patients include 1001 MML files and 152 images. The MML files contained 197 free text-type paragraphs that needed human translation. Discussion: The pilot test in Global Dolphin demonstrates that patient information can be shared across countries through international health data exchange. To achieve cross-border sharing of clinical data, some key issues had to be addressed: establishment of a super directory service across countries; data transformation; and unique one-language translation. Privacy protection was also taken into account. The system is now ready for live use. Conclusion: The project demonstrates a means of achieving worldwide accessibility of medical data, by which the integrity and continuity of patients' health information can be maintained. PMID:21571747

  19. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but also is affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
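
    A simplified stand-in for the weighting issue discussed above: an iterated, effective-variance weighted least-squares fit of the quadratic response, in which the DN noise is propagated through the current fit so that the errors in both variables enter the weight. The exact maximum-likelihood treatment and the model-error analysis of the paper go beyond this sketch.

      import numpy as np

      def fit_quadratic_effective_variance(dn, radiance, sigma_dn, sigma_l, n_iter=5):
          dn = np.asarray(dn, dtype=float)
          radiance = np.asarray(radiance, dtype=float)
          c = np.polyfit(dn, radiance, 2)[::-1]        # initial c0, c1, c2
          A = np.vander(dn, 3, increasing=True)        # columns [1, DN, DN^2]
          for _ in range(n_iter):
              dl_ddn = c[1] + 2.0 * c[2] * dn          # local slope of the current fit
              w = 1.0 / (sigma_l**2 + (dl_ddn * sigma_dn)**2)
              c = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * radiance))
          return c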

  1. Inferring Phylogenetic Networks Using PhyloNet.

    PubMed

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  2. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data, such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches for regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
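
    For readers unfamiliar with Beta regression, the sketch below fits a single-equation Beta regression by maximum likelihood, with a logit link for the mean and a single precision parameter phi; the two-part extension and the quasi-likelihood and Bayesian estimators discussed above are not reproduced. All data are synthetic and strictly inside (0, 1).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def beta_negloglik(params, X, y):
    """Negative log-likelihood of a Beta regression: logit link for the mean,
    single precision parameter phi (estimated on the log scale)."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1.0 - mu) * phi
    logpdf = (gammaln(phi) - gammaln(a) - gammaln(b)
              + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))
    return -np.sum(logpdf)

# Synthetic HRQoL-like outcome strictly inside (0, 1); spikes at 0 or 1 would be
# handled by the two-part extension rather than this single-equation model.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu_true = expit(X @ np.array([0.5, 0.8]))
y = rng.beta(mu_true * 20.0, (1.0 - mu_true) * 20.0)

fit = minimize(beta_negloglik, x0=np.zeros(X.shape[1] + 1), args=(X, y), method="BFGS")
print("coefficients:", np.round(fit.x[:-1], 2), " phi:", round(float(np.exp(fit.x[-1])), 1))
```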

  3. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  4. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
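
    A minimal sketch of the PCA step described above, using plain sample statistics on an already-superposed synthetic ensemble (the paper's maximum likelihood superposition and covariance estimation are not reproduced here): flatten each model to a 3N-vector, form the correlation matrix, and take its leading eigenvectors as the dominant modes of correlated motion.

```python
import numpy as np

# Synthetic ensemble of already-superposed structures: (n_models, n_atoms, 3),
# built from a mean structure, one genuine collective mode, and isotropic noise.
rng = np.random.default_rng(2)
n_models, n_atoms = 30, 50
mean_structure = rng.normal(size=(n_atoms, 3))
mode = rng.normal(size=(n_atoms, 3)); mode /= np.linalg.norm(mode)
amplitude = rng.normal(scale=1.5, size=(n_models, 1, 1))
ensemble = mean_structure + amplitude * mode + 0.2 * rng.normal(size=(n_models, n_atoms, 3))

# Flatten each model to a 3N-dimensional vector and form the correlation matrix
X = ensemble.reshape(n_models, -1)
X = X - X.mean(axis=0)
cov = X.T @ X / (n_models - 1)
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)

# Principal components of the correlation matrix = dominant modes of correlated motion
eigvals, eigvecs = np.linalg.eigh(corr)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
print("fraction of total correlation captured by the leading mode:",
      round(float(eigvals[0] / eigvals.sum()), 2))
```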

  5. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
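
    The basic Poisson maximum-likelihood deconvolution update underlying such algorithms is the Richardson-Lucy iteration; the sketch below shows it for a single frame and a simple multi-frame variant that averages the multiplicative corrections. It assumes known, normalized PSFs and omits the paper's regularization, frame selection, and blind PSF estimation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=100, eps=1e-12):
    """Poisson maximum-likelihood (Richardson-Lucy) deconvolution of one frame,
    assuming a known PSF normalized to sum to 1."""
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        estimate *= fftconvolve(observed / blurred, psf_mirror, mode="same")
    return estimate

def multiframe_richardson_lucy(frames, psfs, n_iter=100, eps=1e-12):
    """Joint Poisson-ML update for several frames of the same object: average the
    multiplicative corrections contributed by each frame/PSF pair."""
    estimate = np.full(frames[0].shape, np.mean(frames), dtype=float)
    for _ in range(n_iter):
        correction = np.zeros_like(estimate)
        for y, h in zip(frames, psfs):
            blurred = fftconvolve(estimate, h, mode="same") + eps
            correction += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
        estimate *= correction / len(frames)
    return estimate

# Example: restore a blurred, Poisson-noisy frame with a known Gaussian PSF
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 200.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
frame = rng.poisson(np.clip(fftconvolve(truth, psf, mode="same"), 0, None)).astype(float)
restored = richardson_lucy(frame, psf)
```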

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  7. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
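
    As a toy example of maximum-likelihood event estimation from detector signals, the sketch below grid-searches the Poisson likelihood of the photosensor counts over candidate interaction positions and energies for a hypothetical 1-D detector with a Gaussian light-spread function; the detector model and all numbers are invented for illustration.

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical 1-D detector: 8 photosensors, Gaussian light-spread function (mm units)
sensor_pos = np.linspace(0.0, 35.0, 8)

def mean_signals(x, energy, sigma=6.0):
    """Mean photosensor counts for an interaction at position x depositing `energy`."""
    lsf = np.exp(-0.5 * ((sensor_pos - x) / sigma) ** 2)
    return energy * lsf / lsf.sum()

def ml_estimate(counts, x_grid, e_grid):
    """Grid-search maximum-likelihood estimate of (position, energy) from Poisson counts."""
    best, best_ll = None, -np.inf
    for x in x_grid:
        for e in e_grid:
            mu = mean_signals(x, e) + 1e-9
            ll = np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))
            if ll > best_ll:
                best, best_ll = (x, e), ll
    return best

rng = np.random.default_rng(5)
counts = rng.poisson(mean_signals(x=21.3, energy=400.0))
x_hat, e_hat = ml_estimate(counts, np.linspace(0, 35, 141), np.linspace(200, 600, 81))
print(f"x_hat = {x_hat:.2f}, e_hat = {e_hat:.0f}")
```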

  8. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  9. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through the TAP.

  10. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
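
    The standard continuous power-law maximum-likelihood (Hill) estimator used in this kind of analysis is short enough to sketch: for data above a lower cutoff xmin, alpha_hat = 1 + n / sum(ln(x_i/xmin)). Scanning xmin over a synthetic mixture of two power laws illustrates the drifting, nonuniversal effective exponent described above; the data and cutoffs are hypothetical.

```python
import numpy as np

def powerlaw_ml_exponent(x, xmin):
    """Continuous power-law maximum-likelihood (Hill) estimator above a cutoff xmin."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= xmin]
    alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
    stderr = (alpha - 1.0) / np.sqrt(tail.size)
    return alpha, stderr

# Synthetic mixture of two power laws (pdf exponents 2.5 and 3.5): the effective
# ML exponent drifts with the lower cutoff instead of settling on one value.
rng = np.random.default_rng(0)
x = np.concatenate([rng.pareto(1.5, 20000) + 1.0, rng.pareto(2.5, 20000) + 1.0])
for xmin in (1.0, 2.0, 5.0, 10.0, 20.0):
    alpha, se = powerlaw_ml_exponent(x, xmin)
    print(f"xmin = {xmin:>4}: alpha = {alpha:.2f} +/- {se:.2f}")
```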

  11. Component, Context, and Manufacturing Model Library (C2M2L)

    DTIC Science & Technology

    2012-11-01

    [Abstract not recovered; only table-of-contents and text fragments survive extraction: "5.1 MML Population and Web Service Interface", "Table 41. Relevant Questions with Associated Web Services", and a passage on implementing web services that provide semantically aware programmatic access to the models.]

  12. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  13. Phylogenetic place of guinea pigs: no support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.

    PubMed

    Cao, Y; Adachi, J; Yano, T; Hasegawa, M

    1994-07-01

    Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.

  14. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  15. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  16. Serum magnesium level is associated with type 2 diabetes in women with a history of gestational diabetes mellitus: the Korea National Diabetes Program study.

    PubMed

    Yang, Sae Jeong; Hwang, Soon Young; Baik, Sei Hyun; Lee, Kwan Woo; Nam, Moon Suk; Park, Yong Soo; Woo, Jeong Taek; Kim, Young Seol; Park, Sunmin; Park, So-Young; Yim, Chang Hoon; Yoon, Hyun Koo; Kim, Sung-Hoon

    2014-01-01

    Gestational diabetes mellitus (GDM) is a strong predictor of postpartum prediabetes and transition to overt type 2 diabetes (T2DM). Although many reports indicate that low magnesium is correlated with deteriorated glucose tolerance, the association between postpartum serum magnesium level and the risk for T2DM in women with a history of GDM has not been evaluated. We analyzed postpartum serum magnesium levels and development of prediabetes and T2DM in women with prior GDM according to American Diabetes Association (ADA) criteria using the Korean National Diabetes Program (KNDP) GDM cohort. During a mean follow-up of 15.6 ± 2.0 months after screening, 116 women were divided into three groups according to glucose tolerance status. Ultimately, eight patients (6.9%) were diagnosed with T2DM, 59 patients (50.9%) with prediabetes, and 49 patients (42.2%) with normal glucose tolerance (NGT) after follow-up. The T2DM group had the lowest serum magnesium level (0.65 [0.63-0.68] mmol/L) in the postpartum period, but there was no significant difference between the prediabetes group (0.70 [0.65-0.70] mmol/L) and the NGT group (0.70 [0.65-0.70] mmol/L) (P=0.073). Multiple logistic regression analysis showed that postpartum HOMA-IR was a significant predictor of both prediabetes and T2DM. Moreover, we found that postpartum serum magnesium level was also a possible predictor for T2DM development. Serum magnesium level in the postpartum period may be a possible predictor for T2DM development in women with a history of GDM.

  17. SPH calculations of asteroid disruptions: The role of pressure dependent failure models

    NASA Astrophysics Data System (ADS)

    Jutzi, Martin

    2015-03-01

    We present recent improvements of the modeling of the disruption of strength dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, for many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find that it is crucial that friction is taken into account and the material has a pressure dependent shear strength. Our investigations of the catastrophic disruption threshold Q_D* as a function of target properties and target sizes up to a few 100 km show that a fully damaged target modeled without friction has a Q_D* which is significantly (5-10 times) smaller than in the case where friction is included. When the effect of the energy dissipation due to compaction (pore crushing) is taken into account as well, the targets become even stronger (Q_D* is increased by a factor of 2-3). On the other hand, cohesion is found to have a negligible effect at large scales and is only important at scales ≲ 1 km. Our results show the relative effects of strength, friction and porosity on the outcome of collisions among small (≲ 1000 km) bodies. These results will be used in a future study to improve existing scaling laws for the outcome of collisions (e.g. Leinhardt and Stewart, 2012).

  18. Effects of physical and nutritional stress conditions during mycelial growth on conidial germination speed, adhesion to host cuticle, and virulence of Metarhizium anisopliae, an entomopathogenic fungus.

    PubMed

    Rangel, Drauzio E N; Alston, Diane G; Roberts, Donald W

    2008-11-01

    Growth under stress may influence pathogen virulence and other phenotypic traits. Conidia of the entomopathogenic fungus Metarhizium anisopliae var. anisopliae (isolate ARSEF 2575) were produced under different stress conditions and then examined for influences on in vitro conidial germination speed, adhesion to the insect cuticle, and virulence to an insect host, Tenebrio molitor. Conidia were produced under non-stress conditions [on potato-dextrose agar plus 1 g l(-1) yeast extract (PDAY; control)], or under the following stress conditions: osmotic (PDAY + sodium chloride or potassium chloride, 0.6 or 0.8 M); oxidative [(PDAY + hydrogen peroxide, 5 mM) or UV-A (irradiation of mycelium on PDAY)]; heat shock (heat treatment of mycelium on PDAY at 45 degrees C, 40 min); and nutritive [minimal medium (MM) with no carbon source, or on MM plus 3 g l(-1) lactose (MML)]. Conidia were most virulent (based on mortality at 3 d) and had the fastest germination rates when produced on MML, followed by MM. In addition, conidial adhesion to host cuticle was greatest when the conidia were produced on MML. Media with high osmolarity (0.8 M) produced conidia with slightly elevated virulence and faster germination rates than conidia produced on the control medium (PDAY), but this trend did not hold for media with the lower osmolarity (0.6 M). Conidia produced from mycelium irradiated with UV-A while growing on PDAY had somewhat elevated virulence levels similar to that of conidia produced on MM, but their germination rate was not increased. Hydrogen peroxide and heat shock treatments did not alter virulence. These results demonstrate that the germination, adhesion and virulence of M. anisopliae conidia can be strongly influenced by culture conditions (including stresses) during production of the conidia.

  19. Purification, crystal structure and antimicrobial activity of phenazine-1-carboxamide produced by a growth-promoting biocontrol bacterium, Pseudomonas aeruginosa MML2212.

    PubMed

    Shanmugaiah, V; Mathivanan, N; Varghese, B

    2010-02-01

    To purify and characterize an antimicrobial compound produced by a biocontrol bacterium, Pseudomonas aeruginosa MML2212, and evaluate its activity against rice pathogens, Rhizoctonia solani and Xanthomonas oryzae pv. oryzae. Pseudomonas aeruginosa strain MML2212 isolated from the rice rhizosphere with wide-spectrum antimicrobial activity was cultured in King's B broth using a fermentor for 36 h. The extracellular metabolites were isolated from the fermented broth using ethyl acetate extraction and purified by two-step silica-gel column chromatography. Three fractions were separated, of which a major compound was obtained in a pure state as yellow needles. It was crystallized after dissolving with chloroform followed by slow evaporation. It is odourless with a melting point of 220-222 degrees C. It was soluble in most of the organic solvents and poorly soluble in water. The molecular mass of the purified compound was estimated as 223.3 by mass spectral analysis. Further, it was characterized by IR, (1)H and (13)C NMR spectral analyses. The crystal structure of the compound was elucidated for the first time by X-ray diffraction study and deposited in the Cambridge Crystallographic Data Centre (http://www.ccdc.cam.ac.uk) with the accession no. CCDC 617344. The crystal compound was undoubtedly identified as phenazine-1-carboxamide (PCN) with the empirical formula of C(13)H(9)N(3)O. As this is the first report on the crystal structure of PCN, it provides additional information to the structural chemistry. Furthermore, the present study reports the antimicrobial activity of purified PCN on major rice pathogens, R. solani and X. oryzae pv. oryzae. Therefore, the PCN can be developed as an ideal agrochemical candidate for the control of both sheath blight and bacterial leaf blight diseases of rice.

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains only the most likely code-word and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
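
    For reference, the conventional Viterbi maximum-likelihood decoder that RMLD is compared against can be sketched compactly for a toy rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal and hard decisions; the RMLD algorithm itself, and the codes actually studied, are not reproduced here.

```python
# Toy hard-decision Viterbi decoder: finds the maximum-likelihood codeword on a
# binary symmetric channel by minimizing the Hamming distance over trellis paths.
G = [0b111, 0b101]         # generator polynomials (7, 5) octal
K = 3                      # constraint length
NSTATES = 1 << (K - 1)     # 4 encoder states

def encode(bits):
    """Encode a bit sequence (terminated with K-1 zeros) into rate-1/2 output bits."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Minimum-Hamming-distance (ML) path through the trellis, tail bits removed."""
    n = len(received) // 2
    INF = 10**9
    metric = [0] + [INF] * (NSTATES - 1)          # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for t in range(n):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(NSTATES), key=lambda s: metric[s])
    return paths[best][:n - (K - 1)]              # drop the zero tail

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg)
coded[3] ^= 1                                     # flip one channel bit
print(viterbi_decode(coded) == msg)               # True: the single error is corrected
```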

  1. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  2. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ~20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
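
    A minimal version of such a Poisson maximum-likelihood fit: minimize the Poisson negative log-likelihood of the observed counts under a parametric model of the mean, instead of a χ² sum. The model, data, and starting values below are synthetic, and the paper's uncertainty estimation for the fit parameters is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, x, counts, model):
    """Poisson negative log-likelihood of observed counts given a model mu(x; params)."""
    mu = model(x, *params)
    if np.any(mu <= 0):
        return np.inf
    return np.sum(mu - counts * np.log(mu) + gammaln(counts + 1))

def gaussian_line(x, amp, center, width, background):
    """Low-count spectral line on a flat background (hypothetical model)."""
    return background + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 60)
true_params = (8.0, 0.3, 1.0, 2.0)
counts = rng.poisson(gaussian_line(x, *true_params))

fit = minimize(neg_log_likelihood, x0=(5.0, 0.0, 2.0, 1.0),
               args=(x, counts, gaussian_line), method="Nelder-Mead")
print("ML parameters:", np.round(fit.x, 3))
```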

  3. Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.

    2008-03-01

    Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia, on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
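
    The maximum likelihood classifier referred to above is, in essence, a per-class multivariate Gaussian discriminant: each class is described by the mean vector and covariance matrix of its training pixels, and a pixel is assigned to the class with the highest Gaussian log-likelihood. The sketch below illustrates this with two invented spectral classes; it is not tied to the Landsat data used in the study.

```python
import numpy as np

class GaussianMLClassifier:
    """Per-class multivariate Gaussian maximum likelihood classifier for pixel spectra."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mean, inv_cov, logdet = self.params_[c]
            d = X - mean
            # Gaussian log-likelihood up to a constant (discriminant function)
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv_cov, d) + logdet))
        return self.classes_[np.argmax(scores, axis=0)]

# Toy usage with two invented spectral classes standing in for training polygons
rng = np.random.default_rng(0)
water = rng.normal([0.05, 0.04, 0.02], 0.01, (200, 3))
forest = rng.normal([0.04, 0.10, 0.30], 0.03, (200, 3))
X = np.vstack([water, forest]); y = np.array([0] * 200 + [1] * 200)
clf = GaussianMLClassifier().fit(X, y)
print(clf.predict(np.array([[0.05, 0.05, 0.03], [0.04, 0.11, 0.28]])))  # -> [0 1]
```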

  4. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling in historic height data.

  5. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates, by Monte Carlo simulation, the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  6. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  7. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the newly proposed model outperforms ML for small datasets.

  8. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
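
    The core of such an analysis, namely a maximum-likelihood log-normal fit followed by an extrapolated exceedance rate, can be sketched in a few lines; the catalogue below is synthetic (the real analysis uses observed -Dst maxima, weighted least-squares comparisons, and bootstrap confidence limits, none of which are reproduced here).

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a catalogue of -Dst storm-time maxima (nT); the real
# analysis uses the observed maxima for 1957-2012.
rng = np.random.default_rng(42)
dst_maxima = rng.lognormal(mean=5.3, sigma=0.45, size=180)

# Maximum-likelihood log-normal fit (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0)

# Tail probability of exceeding the Carrington level (-Dst > 850 nT) and the
# implied occurrence rate, scaled by the catalogue's storms-per-year rate.
p_exceed = stats.lognorm.sf(850.0, shape, loc=0, scale=scale)
storms_per_year = len(dst_maxima) / 56.0
print(f"sigma = {shape:.2f}, median = {scale:.0f} nT, "
      f"events per century above 850 nT: {100 * storms_per_year * p_exceed:.2f}")
```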

  9. Relativistic semiempirical-core-potential calculations in Ca+, Sr+, and Ba+ ions on Lagrange meshes

    NASA Astrophysics Data System (ADS)

    Filippin, Livio; Schiffmann, Sacha; Dohet-Eraly, Jérémy; Baye, Daniel; Godefroid, Michel

    2018-01-01

    Relativistic atomic structure calculations are carried out in alkaline-earth-metal ions using a semiempirical-core-potential approach. The systems are partitioned into frozen-core electrons and an active valence electron. The core orbitals are defined by a Dirac-Hartree-Fock calculation using the grasp2k package. The valence electron is described by a Dirac-like Hamiltonian involving a core-polarization potential to simulate the core-valence electron correlation. The associated equation is solved with the Lagrange-mesh method, which is an approximate variational approach having the form of a mesh calculation because of the use of a Gauss quadrature to calculate matrix elements. Properties involving the low-lying metastable ²D3/2,5/2 states of Ca+, Sr+, and Ba+ are studied, such as polarizabilities, one- and two-photon decay rates, and lifetimes. Good agreement is found with other theory and observation, which is promising for further applications in alkalilike systems.

  10. Sub-Coulomb 3He transfer and its use to extract three-particle asymptotic normalization coefficients

    NASA Astrophysics Data System (ADS)

    Avila, M. L.; Baby, L. T.; Belarge, J.; Keeley, N.; Kemper, K. W.; Koshchiy, E.; Kuchera, A. N.; Rogachev, G. V.; Rusek, K.; Santiago-Gonzalez, D.

    2018-01-01

    Data for the 13C(6Li,t)16O reaction, obtained in inverse kinematics at a 13C incident energy of 7.72 MeV, are presented. A distorted wave Born approximation (DWBA) analysis was used to extract spectroscopic factors and asymptotic normalization coefficients (ANCs) for the ⟨16O∣13C+3He⟩ overlaps, subject to the assumption of a fixed ⟨6Li∣3He+3H⟩ ANC.

  No evidence of reduced collectivity in Coulomb-excited Sn isotopes

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Saxena, M.; Doornenbal, P.; Jhingan, A.; Banerjee, A.; Bhowmik, R. K.; Dutt, S.; Garg, R.; Joshi, C.; Mishra, V.; Napiorkowski, P. J.; Prajapati, S.; Söderström, P.-A.; Kumar, N.; Wollersheim, H.-J.

    2017-11-01

    In a series of Coulomb excitation experiments the first excited 2+ states in the semimagic 112,116,118,120,122,124Sn isotopes were excited using a 58Ni beam at safe Coulomb energy. The B(E2; 0+ → 2+) values were determined with high precision (~3%) relative to 58Ni projectile excitation. These results disagree with previously reported B(E2↑) values [A. Jungclaus et al., Phys. Lett. B 695, 110 (2011), doi:10.1016/j.physletb.2010.11.012] extracted from Doppler-shift attenuation lifetime measurements, whereas the reported mass dependence of B(E2↑) values is very similar to a recent Coulomb excitation study [J. M. Allmond et al., Phys. Rev. C 92, 041303(R) (2015), doi:10.1103/PhysRevC.92.041303]. The stable Sn isotopes, key nuclei in nuclear structure, show no evidence of reduced collectivity, and we thus reconfirm the nonsymmetric behavior of the reduced transition probabilities with respect to the midshell A = 116.

  11. Determination of residual fluoroquinolones in honey by liquid chromatography using metal chelate affinity chromatography.

    PubMed

    Yatsukawa, Yoh-Ichi; Ito, Hironobu; Matsuda, Takahiro; Nakamura, Munetomo; Watai, Masatoshi; Fujita, Kazuhiro

    2011-01-01

    A new analytical method for the simultaneous determination of seven fluoroquinolones, namely, norfloxacin, ciprofloxacin, danofloxacin, enrofloxacin, orbifloxacin, sarafloxacin, and difloxacin, especially in dark-colored honey, has been developed. Fluoroquinolone antibiotics were extracted from samples with McIlvaine buffer solution (pH 4.0) containing EDTA disodium salt dihydrate. The extracts were treated with both a polymeric cartridge and a metal chelate affinity column preloaded with ferric ion (Fe3+). LC separation with fluorescence detection was performed at 40 degrees C using an Inertsil ODS-4 analytical column (150 x 4.6 mm, 3 microm). The mobile phase was composed of a 20 mmol/L citrate buffer solution (pH 3.1)-acetonitrile mixture (70 + 30, v/v) containing 1 mmol/L sodium dodecyl sulfate. Lomefloxacin was used as an internal standard. The developed method was validated according to the criteria of European Commission Decision 2002/657/EC. Decision limits and detection capabilities were below 2.9 and 4.4 microg/kg, respectively.

  12. Single and double spin asymmetries for deeply virtual Compton scattering measured with CLAS and a longitudinally polarized proton target

    NASA Astrophysics Data System (ADS)

    Pisano, S.; Biselli, A.; Niccolai, S.; Seder, E.; Guidal, M.; Mirazita, M.; Adhikari, K. P.; Adikaram, D.; Amaryan, M. J.; Anderson, M. D.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Battaglieri, M.; Batourine, V.; Bedlinskiy, I.; Bosted, P.; Briscoe, B.; Brock, J.; Brooks, W. K.; Burkert, V. D.; Carlin, C.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Colaneri, L.; Cole, P. L.; Compton, N.; Contalbrigo, M.; Cortes, O.; Crabb, D. G.; Crede, V.; D'Angelo, A.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fedotov, G.; Fegan, S.; Fersch, R.; Filippi, A.; Fleming, J. A.; Fradi, A.; Garillon, B.; Garçon, M.; Ghandilyan, Y.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guo, L.; Hafidi, K.; Hanretty, C.; Hattawy, M.; Hicks, K.; Holtrop, M.; Hughes, S. M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Jenkins, D.; Jiang, X.; Jo, H. S.; Joo, K.; Joosten, S.; Keith, C. D.; Keller, D.; Kim, A.; Kim, W.; Klein, F. J.; Kubarovsky, V.; Kuhn, S. E.; Lenisa, P.; Livingston, K.; Lu, H. Y.; MacCormick, M.; MacGregor, I. J. D.; Mayer, M.; McKinnon, B.; Meekins, D. G.; Meyer, C. A.; Mokeev, V.; Montgomery, R. A.; Moody, C. I.; Munoz Camacho, C.; Nadel-Turonski, P.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Phelps, W.; Phillips, J. J.; Pogorelko, O.; Price, J. W.; Procureur, S.; Prok, Y.; Puckett, A. J. R.; Ripani, M.; Rizzo, A.; Rosner, G.; Rossi, P.; Roy, P.; Sabatié, F.; Salgado, C.; Schott, D.; Schumacher, R. A.; Skorodumina, I.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Stoler, P.; Strauch, S.; Sytnik, V.; Tian, Ye; Tkachenko, S.; Turisini, M.; Ungaro, M.; Voutier, E.; Walford, N. K.; Watts, D. P.; Wei, X.; Weinstein, L. B.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.; CLAS Collaboration

    2015-03-01

    Single-beam, single-target, and double spin asymmetries for hard exclusive electroproduction of a photon on the proton, e⃗p⃗ → e'p'γ, are presented. The data were taken at Jefferson Lab using the CEBAF Large Acceptance Spectrometer and a longitudinally polarized 14NH3 target. The three asymmetries were measured in 165 four-dimensional kinematic bins, covering the widest kinematic range ever explored simultaneously for beam and target-polarization observables in the valence quark region. The kinematic dependences of the obtained asymmetries are discussed and compared to the predictions of models of generalized parton distributions. The measurement of three DVCS spin observables at the same kinematic points allows a quasi-model-independent extraction of the imaginary parts of the H and H̃ Compton form factors, which give insight into the electric and axial charge distributions of valence quarks in the proton.

  13. Unconditional violation of the shot-noise limit in photonic quantum metrology

    NASA Astrophysics Data System (ADS)

    Slussarenko, Sergei; Weston, Morgan M.; Chrzanowski, Helen M.; Shalm, Lynden K.; Verma, Varun B.; Nam, Sae Woo; Pryde, Geoff J.

    2017-11-01

    Interferometric phase measurement is widely used to precisely determine quantities such as length, speed and material properties [1-3]. Without quantum correlations, the best phase sensitivity Δφ achievable using n photons is the shot-noise limit, Δφ = 1/√n. Quantum-enhanced metrology promises better sensitivity, but, despite theoretical proposals stretching back decades [3,4], no measurement using photonic (that is, definite photon number) quantum states has truly surpassed the shot-noise limit. Instead, all such demonstrations, by discounting photon loss, detector inefficiency or other imperfections, have considered only a subset of the photons used. Here, we use an ultrahigh-efficiency photon source and detectors to perform unconditional entanglement-enhanced photonic interferometry. Sampling a birefringent phase shift, we demonstrate precision beyond the shot-noise limit without artificially correcting our results for loss and imperfections. Our results enable quantum-enhanced phase measurements at low photon flux and open the door to the next generation of optical quantum metrology advances.

  14. Cr6+-containing phases in the system CaO-Al2O3-CrO4²⁻-H2O at 23 °C

    NASA Astrophysics Data System (ADS)

    Pöllmann, Herbert; Auer, Stephan

    2012-01-01

    Synthesis and investigation of lamellar calcium aluminium hydroxy salts was performed to study the incorporation of chromate ions in the interlayer of lamellar calcium aluminium hydroxy salts. Different AFm-phases (calcium aluminate hydrate with alumina, ferric oxide, mono-anion phase) containing chromate were synthesized. These AFm-phases belong to the group of layered double hydroxides (LDHs). 3CaO·Al2O3·CaCrO4·nH2O and C3A·1/2Ca(OH)2·1/2CaCrO4·12H2O were obtained as pure phases and their different distinct interlayer water contents and properties determined. Solid solution of chromate-containing phases and tetracalcium-aluminate-hydrate (TCAH) were studied. The uptake of chromate into TCAH from solutions was proven. Chromate contents in solution decrease to <0.2 mg/l.

  15. Prognostic Determinants in Patients with Traumatic Pancreatic Injuries

    PubMed Central

    Hwang, Seong Youn

    2008-01-01

    The aim of this study was to identify factors that predict morbidity and mortality in patients with traumatic pancreatic injuries. A retrospective review was performed on 75 consecutive patients with traumatic pancreatic injuries admitted to the Emergency Medical Center at Masan Samsung Hospital and subsequently underwent laparotomy during the period January 2000 to December 2005. Overall mortality and morbidity rates were 13.3% and 49.3%, respectively. A multivariate regression analysis revealed that greater than 12 blood transfusions and an initial base deficit of less than -11 mmol/L were the most important predictors of mortality (p<0.05). On the other hand, the most important predictors of morbidity were surgical complexity and an initial base deficit of less than -5.8 mmol/L (p<0.01). These data suggest that early efforts to prevent shock and to rapidly control bleeding are most likely to improve the outcome in patients with traumatic pancreatic injuries. The severity of pancreatic injury per se influenced only morbidity. PMID:18303212

  16. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  17. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  18. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
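
    A sketch of maximum-likelihood estimation with competing failure modes, using one Weibull life distribution per mode (equivalent to a smallest extreme-value distribution on the log-time scale): the failing mode contributes its density and every other mode its survival function, assuming independent modes. Stress-variable dependence and the study's specific distributions are not reproduced; the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, mode):
    """Competing-risks negative log-likelihood with one Weibull per failure mode
    (parameters are log-scale and log-shape for each mode)."""
    n_modes = len(np.unique(mode))
    ll = 0.0
    for k in range(n_modes):
        eta, beta = np.exp(params[2 * k]), np.exp(params[2 * k + 1])
        z = (t / eta) ** beta
        # survival term for every unit, density (hazard) term only for mode-k failures
        ll += np.sum(-z)
        ll += np.sum((np.log(beta / eta) + (beta - 1.0) * np.log(t / eta))[mode == k])
    return -ll

# Simulated data: two independent latent Weibull failure modes, observe the earliest
rng = np.random.default_rng(4)
n = 400
t1 = 100.0 * rng.weibull(2.0, n)
t2 = 140.0 * rng.weibull(1.2, n)
t = np.minimum(t1, t2)
mode = (t2 < t1).astype(int)

fit = minimize(neg_loglik, x0=np.log([80.0, 1.0, 80.0, 1.0]), args=(t, mode),
               method="Nelder-Mead", options={"maxiter": 5000})
print(np.round(np.exp(fit.x), 2))   # roughly [100, 2, 140, 1.2]
```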

  19. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  1. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  2. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  3. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  4. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  5. A 3D approximate maximum likelihood localization solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-09-23

    A robust three-dimensional solver was needed to estimate, accurately and efficiently, the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in sufficient detail to assess the function of dam-passage design alternatives and to support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
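
    The record does not give the solver's internals; as a hedged sketch of the general idea, if the time-difference-of-arrival (TDOA) errors are assumed Gaussian, maximizing the likelihood of a source position reduces to nonlinear least squares on predicted minus measured TDOAs, as below (the hydrophone geometry and noise level are made up for illustration).

      # Minimal sketch of approximate ML localization from time-difference-
      # of-arrival (TDOA) measurements, assuming Gaussian TDOA errors so that
      # maximizing the likelihood reduces to nonlinear least squares.
      import numpy as np
      from scipy.optimize import least_squares

      C = 1500.0  # nominal speed of sound in water, m/s

      hydrophones = np.array([[0, 0, 0], [50, 0, 5], [0, 50, 5],
                              [50, 50, 0], [25, 25, 30]], float)
      true_pos = np.array([20.0, 30.0, 10.0])

      def tdoa(pos, sensors, ref=0):
          d = np.linalg.norm(sensors - pos, axis=1) / C
          return np.delete(d - d[ref], ref)   # time differences w.r.t. reference sensor

      rng = np.random.default_rng(2)
      meas = tdoa(true_pos, hydrophones) + 1e-5 * rng.standard_normal(len(hydrophones) - 1)

      def residuals(pos):
          return tdoa(pos, hydrophones) - meas

      fit = least_squares(residuals, x0=np.array([25.0, 25.0, 2.0]))
      print("estimated position:", fit.x)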

  6. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  7. Search for Point Sources of Ultra-High-Energy Cosmic Rays above 4.0 × 10^19 eV Using a Maximum Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2005-04-01

    We present the results of a search for cosmic-ray point sources at energies in excess of 4.0 × 10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.

  8. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  9. Using the β-binomial distribution to characterize forest health

    Treesearch

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
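
    A minimal sketch of the estimation described (maximum likelihood for the β-binomial), using simulated counts rather than the dogwood anthracnose data; the parameterization (a, b) and the use of scipy are choices of this illustration.

      # Minimal sketch: maximum-likelihood estimation of beta-binomial
      # parameters (a, b) for dichotomous counts y out of n trials per plot,
      # using scipy's betabinom log-pmf on simulated data.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import betabinom

      n = np.full(200, 10)                                       # trees examined per plot
      y = betabinom.rvs(10, 2.0, 5.0, size=200, random_state=42) # simulated infected counts

      def neg_log_lik(log_params):
          a, b = np.exp(log_params)            # keep parameters positive
          return -np.sum(betabinom.logpmf(y, n, a, b))

      fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
      print("ML estimates (a, b):", np.exp(fit.x))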

  10. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  11. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  12. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  13. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  14. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  15. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  16. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. 
    The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.

  17. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  18. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  19. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  20. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  1. Master teachers' responses to twenty literacy and science/mathematics practices in deaf education.

    PubMed

    Easterbrooks, Susan R; Stephenson, Brenda; Mertens, Donna

    2006-01-01

    Under a grant to improve outcomes for students who are deaf or hard of hearing awarded to the Association of College Educators--Deaf/Hard of Hearing, a team identified content that all teachers of students who are deaf and hard of hearing must understand and be able to teach. Also identified were 20 practices associated with content standards (10 each, literacy and science/mathematics). Thirty-seven master teachers identified by grant agents rated the practices on a Likert-type scale indicating the maximum benefit of each practice and maximum likelihood that they would use the practice, yielding a likelihood-impact analysis. The teachers showed strong agreement on the benefits and likelihood of use of the rated practices. Concerns about implementation of many of the practices related to time constraints and mixed-ability classrooms were themes of the reviews. Actions for teacher preparation programs were recommended.

  2. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  3. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  4. Genome-Wide Linkage and Positional Association Analyses Identify Associations of Novel AFF3 and NTM Genes with Triglycerides: The GenSalt Study

    PubMed Central

    Li, Changwei; Bazzano, Lydia A.L.; Rao, Dabeeru C.; Hixson, James E.; He, Jiang; Gu, Dongfeng; Gu, Charles C.; Shimmin, Lawrence C.; Jaquish, Cashell E.; Schwander, Karen; Liu, De-Pei; Huang, Jianfeng; Lu, Fanghong; Cao, Jie; Chong, Shen; Lu, Xiangfeng; Kelly, Tanika N.

    2016-01-01

    We conducted a genome-wide linkage scan and positional association study to identify genes and variants influencing blood lipid levels among participants of the Genetic Epidemiology Network of Salt-Sensitivity (GenSalt) study. The GenSalt study was conducted among 1906 participants from 633 Han Chinese families. Lipids were measured from overnight fasting blood samples using standard methods. Multipoint quantitative trait genome-wide linkage scans were performed on the high-density lipoprotein, low-density lipoprotein, and log-transformed triglyceride phenotypes. Using dense panels of single nucleotide polymorphisms (SNPs), single-marker and gene-based association analyses were conducted to follow up on promising linkage signals. Additive associations between each SNP and lipid phenotypes were tested using mixed linear regression models. Gene-based analyses were performed by combining P-values from single-marker analyses within each gene using the truncated product method (TPM). Significant associations were assessed for replication among 777 Asian participants of the Multi-ethnic Study of Atherosclerosis (MESA). Bonferroni correction was used to adjust for multiple testing. In the GenSalt study, suggestive linkage signals were identified at 2p11.2–2q12.1 [maximum multipoint LOD score (MML) = 2.18 at 2q11.2] and 11q24.3–11q25 (MML = 2.29 at 11q25) for the log-transformed triglyceride phenotype. Follow-up analyses of these two regions revealed gene-based associations of charged multivesicular body protein 3 (CHMP3), ring finger protein 103 (RNF103), AF4/FMR2 family, member 3 (AFF3), and neurotrimin (NTM) with triglycerides (P = 4 × 10^-4, 1.00 × 10^-5, 2.00 × 10^-5, and 1.00 × 10^-7, respectively). Both the AFF3 and NTM triglyceride associations were replicated among MESA study participants (P = 1.00 × 10^-7 and 8.00 × 10^-5, respectively). Furthermore, NTM explained the linkage signal on chromosome 11. In conclusion, we identified novel genes associated with lipid phenotypes in linkage regions on chromosomes 2 and 11. PMID:25819087

  5. Students, Graduates, and Dropouts in the Labor Market, October 1975. Special Labor Force Report 199.

    ERIC Educational Resources Information Center

    Young, Anne McD.

    1976-01-01

    This report by the U.S. Department of Labor, Bureau of Statistics covers youth employment and education, and their interwoven causes and results. Numerous statistical charts and explanatory notes are included. Factors, such as age, race, sex and status, are analyzed. (MML)

  6. 77 FR 5489 - Identification of Human Cell Lines Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-03

    ...-01] Identification of Human Cell Lines Project AGENCY: National Institute of Standards and Technology... cell line samples as part of the Identification of Human Cell Lines Project. All data and corresponding... cell lines accepted on the NIST Applied Genetics Group Web site at http://www.nist.gov/mml/biochemical...

  7. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage (2458 species (J. Nat. Hist. 29 (1995) 1037)), including species that weave leaf nests together with larval silk and in which the metapleural gland (the ancestrally defining ant character) has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  8. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
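
    A minimal sketch of the weighting idea, under simplifying assumptions not taken from the paper (constant occupancy and detection probabilities, simulated detection histories and weights): each site's occupancy log-likelihood contribution is multiplied by the inverse of its selection probability to form the pseudo-likelihood.

      # Minimal sketch of pseudo-maximum-likelihood estimation for a
      # single-season site-occupancy model with survey weights: each site's
      # log-likelihood contribution is weighted by the inverse of its
      # selection probability. Constant occupancy (psi) and detection (p)
      # probabilities are assumed; data and weights are simulated.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(7)
      n_sites, n_visits = 300, 4
      psi_true, p_true = 0.6, 0.4
      z = rng.binomial(1, psi_true, n_sites)                         # latent occupancy state
      y = rng.binomial(1, p_true, (n_sites, n_visits)) * z[:, None]  # detection histories
      w = 1.0 / rng.uniform(0.2, 1.0, n_sites)                       # inverse selection probabilities

      def neg_pseudo_loglik(params):
          psi, p = expit(params)                                     # map to (0, 1)
          det = y.sum(axis=1)
          lik_occupied = psi * p ** det * (1 - p) ** (n_visits - det)
          lik_empty = (1 - psi) * (det == 0)
          return -np.sum(w * np.log(lik_occupied + lik_empty))

      fit = minimize(neg_pseudo_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
      print("pseudo-ML estimates (psi, p):", expit(fit.x))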

  9. DSN telemetry system performance using a maximum likelihood convolutional decoder

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Kemp, R. P.

    1977-01-01

    Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7 and special test software. The test results confirm the superiority of the rate 1/3 over that of the rate 1/2. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.

  10. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  11. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  12. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  13. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
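
    To make the comparison concrete, the sketch below contrasts a Gaussian maximum likelihood classifier (class means and covariances) with a minimum-Euclidean-distance classifier (class means only) on synthetic two-class data; it illustrates the type of comparison reported, not the remote sensing experiment itself.

      # Minimal sketch contrasting a Gaussian maximum-likelihood classifier
      # with a minimum-Euclidean-distance classifier on synthetic data.
      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(6)
      mean0, mean1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
      cov0 = np.array([[1.0, 0.6], [0.6, 1.0]])
      cov1 = np.array([[0.5, -0.2], [-0.2, 1.5]])
      X = np.vstack([rng.multivariate_normal(mean0, cov0, 500),
                     rng.multivariate_normal(mean1, cov1, 500)])
      y = np.repeat([0, 1], 500)

      # "Training": estimate class means and covariances from labeled data.
      means = [X[y == c].mean(axis=0) for c in (0, 1)]
      covs = [np.cov(X[y == c].T) for c in (0, 1)]

      # Gaussian ML rule: assign each pixel to the class with the larger log-density.
      ll = np.column_stack([multivariate_normal.logpdf(X, means[c], covs[c]) for c in (0, 1)])
      pred_ml = ll.argmax(axis=1)

      # Minimum-distance rule: assign each pixel to the nearest class mean.
      d = np.column_stack([np.linalg.norm(X - means[c], axis=1) for c in (0, 1)])
      pred_md = d.argmin(axis=1)

      print("ML accuracy:          ", np.mean(pred_ml == y))
      print("min-distance accuracy:", np.mean(pred_md == y))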

  14. Maximum likelihood conjoint measurement of lightness and chroma.

    PubMed

    Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna

    2016-03-01

    Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.

  15. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  16. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is common in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
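
    A minimal sketch of the weighting idea, with an assumed lognormal model and simulated size-biased sampling standing in for the food-sample data: each observation's log-likelihood contribution is weighted by the inverse of its selection probability.

      # Minimal sketch of weighted maximum-likelihood estimation for data
      # collected with unequal selection probabilities: each observation's
      # log-likelihood contribution is weighted by 1 / inclusion probability.
      # A lognormal model stands in for a microbial concentration distribution.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      x = rng.lognormal(mean=1.0, sigma=0.8, size=500)      # true concentrations
      pi = np.clip(0.2 + 0.6 * (x / x.max()), 0.05, 1.0)    # size-biased inclusion probabilities
      keep = rng.uniform(size=x.size) < pi                  # realized sample
      x_s, w = x[keep], 1.0 / pi[keep]                      # sampled values and weights

      def neg_weighted_loglik(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          # lognormal log-density via the normal log-density of log(x)
          ll = norm.logpdf(np.log(x_s), mu, sigma) - np.log(x_s)
          return -np.sum(w * ll)

      fit = minimize(neg_weighted_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
      print("weighted ML estimates (mu, sigma):", fit.x[0], np.exp(fit.x[1]))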

  17. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak

  18. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  19. Determining crop residue type and class using satellite acquired data. M.S. Thesis Progress Report, Jun. 1990

    NASA Technical Reports Server (NTRS)

    Zhuang, Xin

    1990-01-01

    LANDSAT Thematic Mapper (TM) data for March 23, 1987 with accompanying ground truth data for the study area in Miami County, IN were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. One geographic information system (GIS) layer of land ownership was added to each original image as the eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural networks were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without need of any transformation. The neural network classifier was able to improve the classification by incorporating a GIS-layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to consider this eighth band of data and thus, its results could not be improved by its consideration.

  20. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  1. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311

  2. Multicenter evaluation of molecular and culture-dependent diagnostics for Shigella species and Entero-invasive Escherichia coli in the Netherlands.

    PubMed

    van den Beld, Maaike J C; Friedrich, Alexander W; van Zanten, Evert; Reubsaet, Frans A G; Kooistra-Smid, Mirjam A M D; Rossen, John W A

    2016-12-01

    An inter-laboratory collaborative trial for the evaluation of diagnostics for detection and identification of Shigella species and Entero-invasive Escherichia coli (EIEC) was performed. Sixteen Medical Microbiological Laboratories (MMLs) participated. MMLs were interviewed about their diagnostic methods and a sample panel, consisting of DNA-extracts and spiked stool samples with different concentrations of Shigella flexneri, was provided to each MML. The results of the trial showed an enormous variety in culture-dependent and molecular diagnostic techniques currently used among MMLs. Despite the various molecular procedures, 15 out of 16 MMLs were able to detect Shigella species or EIEC in all the samples provided, showing that the diversity of methods has no effect on the qualitative detection of Shigella flexneri. In contrast, semi-quantitative analysis showed that the minimum and maximum values per sample differed by approximately five threshold cycles (Ct-values) between the MMLs included in the study. This indicates that defining a uniform Ct-value cut-off for notification to health authorities is not advisable. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...

  4. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
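
    The record's argument can be made concrete with a simplified worked example. Assuming a phase-known backcross (a simplification of the F2 design analysed in the paper), the maximum lod score with estimated recombination fraction p is Zmax = n[p log10 p + (1 - p) log10(1 - p) + log10 2], and solving Zmax >= Z* for n gives the minimum sample size; the sketch below evaluates this for a lod threshold of 3.

      # Minimal worked example of resolving the lod-score rule for sample
      # size. Assumption (a simplification of the record's F2 setting): a
      # phase-known backcross in which each of n individuals is scored as
      # recombinant or not, with estimated recombination fraction p. Then
      #   Zmax = n * [ p*log10(p) + (1-p)*log10(1-p) + log10(2) ]
      # and the minimum n for Zmax >= Z_threshold follows directly.
      import math

      def min_sample_size(p, z_threshold=3.0):
          per_individual = (p * math.log10(p)
                            + (1.0 - p) * math.log10(1.0 - p)
                            + math.log10(2.0))
          return math.ceil(z_threshold / per_individual)

      for p in (0.05, 0.10, 0.20):
          print(f"recombination fraction {p:.2f}: n >= {min_sample_size(p)}")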

  5. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program for the software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
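
    A minimal sketch of the likelihood such a program maximizes, with Python standing in for the Fortran implementation and simulated fatigue lives: failed units contribute the Weibull log-density and suspended (type-I censored) units contribute the log survival function.

      # Minimal sketch of two-parameter Weibull maximum-likelihood estimation
      # with type-I (time) censoring: failed units contribute the log-density,
      # suspended units contribute the log of the survival function.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      shape_true, scale_true, t_censor = 2.0, 100.0, 120.0
      t = scale_true * rng.weibull(shape_true, size=200)   # true failure times
      failed = t <= t_censor
      t_obs = np.minimum(t, t_censor)                      # observed (possibly censored) times

      def neg_log_lik(log_params):
          k, lam = np.exp(log_params)                      # shape, scale (kept positive)
          z = t_obs / lam
          log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
          log_surv = -z ** k                               # log S(t) for the Weibull
          return -(np.sum(log_pdf[failed]) + np.sum(log_surv[~failed]))

      fit = minimize(neg_log_lik, x0=np.log([1.0, np.mean(t_obs)]), method="Nelder-Mead")
      print("ML estimates (shape, scale):", np.exp(fit.x))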

  6. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.

  7. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  8. Model-Driven Development of Interactive Multimedia Applications with MML

    NASA Astrophysics Data System (ADS)

    Pleuss, Andreas; Hussmann, Heinrich

    There is an increasing demand for high-quality interactive applications which combine complex application logic with a sophisticated user interface, making use of individual media objects like graphics, animations, 3D graphics, audio or video. Their development is still challenging as it requires the integration of software design, user interface design, and media design.

  9. Site Inspection Report for Former Nansemond Ordnance Depot, Suffolk, VA

    DTIC Science & Technology

    2012-01-01

    previously addressed, or there is no reason to suspect contamination was ever present at an MRS. [The remainder of this snippet is a fragment of the report's module-rating table, listing HHE module ratings and combination ratings (e.g., HHH, HML, MMM, MLL) for munitions response sites (MRS) with statuses such as "No Longer Required" and "Evaluation Pending".]

  10. Resonant production of dark photons in positron beam dump experiments

    NASA Astrophysics Data System (ADS)

    Nardi, Enrico; Carvajal, Cristian D. R.; Ghoshal, Anish; Meloni, Davide; Raggi, Mauro

    2018-05-01

    Positron beam dump experiments have unique features to search for very narrow resonances coupled superweakly to e+e- pairs. Due to the continued loss of energy from soft photon bremsstrahlung, in the first few radiation lengths of the dump a positron beam can continuously scan for resonant production of new resonances via e+ annihilation off an atomic e- in the target. In the case of a dark photon A' kinetically mixed with the photon, this production mode is of first order in the electromagnetic coupling α, and thus parametrically enhanced with respect to the O(α^2) e+e- → γA' production mode and to the O(α^3) A' bremsstrahlung in e--nucleon scattering so far considered. If the lifetime is sufficiently long to allow the A' to exit the dump, A' → e+e- decays could be easily detected and distinguished from backgrounds. We explore the foreseeable sensitivity of the Frascati PADME experiment in searching with this technique for the 17 MeV dark photon invoked to explain the 8Be anomaly in nuclear transitions.

  11. Microscopic analysis of shape transition in neutron-deficient Yb isotopes

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Tong, H.; Wang, X. F.; Wang, H.; Wang, D. Q.; Wang, X. Y.; Yao, J. M.

    2018-01-01

    The development of nuclear collectivity in even-even 152-170Yb is studied with three types of mean-field calculations: the nonrelativistic Hartree-Fock plus BCS calculation using the Skyrme SLy4 force plus a density-dependent δ pairing force and the relativistic mean-field calculation using a point-coupling energy functional supplemented with either a density-independent δ pairing force or a separable pairing force. The low-lying states are obtained by solving a five-dimensional collective Hamiltonian with parameters determined from the three mean-field solutions. The energy surfaces, excitation energies, electric multipole transition strengths, and differential isotope shifts are presented in comparison with available data. Our results show that different treatments of pairing correlations have a significant influence on the speed at which collectivity develops with increasing neutron number. All the calculations demonstrate the important role of dynamic shape-mixing effects in resolving the puzzle in the dramatic increase of charge radius from 152Yb to 154Yb and the role of triaxiality in 160,162,164Yb.

  12. Transportable Optical Lattice Clock with 7 × 10^-17 Uncertainty

    NASA Astrophysics Data System (ADS)

    Koller, S. B.; Grotti, J.; Vogt, St.; Al-Masoudi, A.; Dörscher, S.; Häfner, S.; Sterr, U.; Lisdat, Ch.

    2017-02-01

    We present a transportable optical clock (TOC) with 87Sr. Its complete characterization against a stationary lattice clock resulted in a systematic uncertainty of 7.4 × 10^-17, which is currently limited by the statistics of the determination of the residual lattice light shift, and an instability of 1.3 × 10^-15/√τ with an averaging time τ in seconds. Measurements confirm that the systematic uncertainty can be reduced to below the design goal of 1 × 10^-17. To our knowledge, these are the best uncertainties and instabilities reported for any transportable clock to date. For autonomous operation, the TOC has been installed in an air-conditioned car trailer. It is suitable for chronometric leveling with submeter resolution as well as for intercontinental cross-linking of optical clocks, which is essential for a redefinition of the International System of Units (SI) second. In addition, the TOC will be used for high-precision experiments for fundamental science that are commonly tied to precise frequency measurements, and its development is an important step to space-borne optical clocks.

  13. Search for a Hypothetical 16.7 MeV Gauge Boson and Dark Photons in the NA64 Experiment at CERN

    NASA Astrophysics Data System (ADS)

    Banerjee, D.; Burtsev, V. E.; Chumakov, A. G.; Cooke, D.; Crivelli, P.; Depero, E.; Dermenev, A. V.; Donskov, S. V.; Dusaev, R. R.; Enik, T.; Charitonidis, N.; Feshchenko, A.; Frolov, V. N.; Gardikiotis, A.; Gerassimov, S. G.; Gninenko, S. N.; Hösgen, M.; Jeckel, M.; Karneyeu, A. E.; Kekelidze, G.; Ketzer, B.; Kirpichnikov, D. V.; Kirsanov, M. M.; Konorov, I. V.; Kovalenko, S. G.; Kramarenko, V. A.; Kravchuk, L. V.; Krasnikov, N. V.; Kuleshov, S. V.; Lyubovitskij, V. E.; Lysan, V.; Matveev, V. A.; Mikhailov, Yu. V.; Peshekhonov, D. V.; Polyakov, V. A.; Radics, B.; Rojas, R.; Rubbia, A.; Samoylenko, V. D.; Tikhomirov, V. O.; Tlisov, D. A.; Toropin, A. N.; Trifonov, A. Yu.; Vasilishin, B. I.; Vasquez Arenas, G.; Volkov, P. V.; Volkov, V.; Ulloa, P.; NA64 Collaboration

    2018-06-01

    We report the first results on a direct search for a new 16.7 MeV boson (X) which could explain the anomalous excess of e+e- pairs observed in decays of the excited 8Be* nucleus. Because of its coupling to electrons, the X could be produced in the bremsstrahlung reaction e-Z → e-Z X by a 100 GeV e- beam incident on an active target in the NA64 experiment at the CERN Super Proton Synchrotron and observed through its subsequent decay into an e+e- pair. With 5.4 ×1010 electrons on target, no evidence for such decays was found, allowing us to set the first limits on the X-e- coupling in the range 1.3 ×10-4 ≲ ɛe ≲ 4.2 ×10-4, excluding part of the allowed parameter space. We also set new bounds on the mixing strength of photons with dark photons (A') from nonobservation of the decay A'→e+e- of the bremsstrahlung A' with a mass ≲23 MeV.

  14. Effect of perfusion of bile salts solutions into the oesophagus of hiatal hernia patients and controls.

    PubMed Central

    Bachir, G S; Collis, J L

    1976-01-01

    Tests of the response to perfusion of the oesophagus were made in 54 patients divided into three groups. Group I consisted of patients with symptomatic hiatal hernia, group II of hiatal hernia patients with peptic stricture, and group III of normal individuals. Each individual oesophagus was perfused at a rate of 45-65 drops per minute over 25 minutes with six solutions: normal saline, N/10 HCl, taurine conjugates of bile salts in normal saline, taurine conjugates of bile salts in N/10 HCl, glycine conjugates of bile salts in normal saline, and taurine and glycine conjugates in a ratio of 1 to 2 in normal saline. It was found that acidified taurine solutions were more irritating than acid alone. With a 2 mM/l solution of the taurine conjugates in acid, symptoms were produced even in controls. With a 1 mM/l solution of the same conjugates, the majority of normal people felt slight heartburn or nothing, and therefore perfusion of such a solution into the oesophagus could be used as a test for oesophagitis. PMID:941112

  15. Two Clock Transitions in Neutral Yb for the Highest Sensitivity to Variations of the Fine-Structure Constant

    NASA Astrophysics Data System (ADS)

    Safronova, Marianna S.; Porsev, Sergey G.; Sanner, Christian; Ye, Jun

    2018-04-01

    We propose a new frequency standard based on the 4f14 6s6p 3P0 - 4f13 6s2 5d (J = 2) transition in neutral Yb. This transition has the potential for high stability and accuracy and the advantage of the highest sensitivity among atomic clocks to variation of the fine-structure constant α. We find its dimensionless α-variation enhancement factor to be K = -15, compared with the most sensitive current clock (Yb+ E3, K = -6), and 18 times larger than in any other neutral-atom clock (Hg, K = 0.8). Combined with the unprecedented stability of an optical lattice clock for neutral atoms, this high sensitivity opens new perspectives for searches for ultralight dark matter and for tests of theories beyond the standard model of elementary particles. Moreover, together with the well-established 1S0-3P0 transition, one will have two clock transitions operating in neutral Yb, whose interleaved interrogations may further reduce the systematic uncertainties of such clock-comparison experiments.

  16. Enhancement of CLAIM (clinical accounting information) for a localized Chinese version.

    PubMed

    Guo, Jinqiu; Takada, Akira; Niu, Tie; He, Miao; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2005-10-01

    CLinical Accounting InforMation (CLAIM) is a standard for the exchange of data between patient accounting systems and electronic medical record (EMR) systems. It uses eXtensible Markup Language (XML) as a meta-language and was developed in Japan. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data between different medical institutions. It has inherited the basic structure of MML 2.x, and the current version, 2.1, contains two modules and nine data definition tables. In China, no data exchange standard yet exists that links EMR systems to accounting systems. Taking advantage of CLAIM's flexibility, we created a localized Chinese version based on CLAIM 2.1. Because Chinese receipt systems differ from Japanese ones, some information, such as prescription formats, also differs. Two CLAIM modules were re-engineered and six data definition tables were either added or redefined. The Chinese version of CLAIM takes local needs into account, and consequently it is now possible to transfer data effectively between the patient accounting systems and EMR systems of Chinese medical institutions.

  17. Exact result in strong wave turbulence of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Düring, Gustavo; Krstulovic, Giorgio

    2018-02-01

    An exact result concerning the energy transfers between nonlinear waves of a thin elastic plate is derived. Following Kolmogorov's original ideas in hydrodynamical turbulence, but applied to the Föppl-von Kármán equation for thin plates, the corresponding Kármán-Howarth-Monin relation and an equivalent of Kolmogorov's 4/5 law are derived. A third-order structure function involving increments of the amplitude, velocity, and the Airy stress function of a plate is proven to be equal to -ɛℓ, where ℓ is a length scale in the inertial range at which the increments are evaluated and ɛ is the energy dissipation rate. Numerical data confirm this law. In addition, a useful definition of the energy fluxes in Fourier space is introduced and proven numerically to be flat in the inertial range. The exact results derived in this Rapid Communication are valid for both weak and strong wave turbulence. They could be used as a theoretical benchmark of new wave-turbulence theories and to develop further analogies with hydrodynamical turbulence.

  18. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in

  19. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.

  20. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.

  1. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples are displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
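
    One widely used automatic, data-driven rule for the kernel scaling factor is leave-one-out likelihood cross-validation. The sketch below (plain NumPy; the function name and the bandwidth grid are illustrative choices, not taken from the report) selects the Gaussian-kernel bandwidth that maximizes the leave-one-out log likelihood of the sample.

      import numpy as np

      def loo_likelihood_bandwidth(x, grid=None):
          """Choose a Gaussian-kernel bandwidth by leave-one-out maximum
          likelihood cross-validation (one common automatic criterion; not
          necessarily the interactive rule discussed in the report)."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          if grid is None:
              grid = np.geomspace(0.05, 2.0, 60) * x.std()
          diffs = x[:, None] - x[None, :]
          best_h, best_ll = None, -np.inf
          for h in grid:
              k = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi))
              np.fill_diagonal(k, 0.0)               # leave each point out
              loo_density = k.sum(axis=1) / (n - 1)
              ll = np.log(np.maximum(loo_density, 1e-300)).sum()
              if ll > best_ll:
                  best_h, best_ll = h, ll
          return best_h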

  2. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  3. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and often fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
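
    For readers who want the baseline made concrete, a minimal EM iteration for a two-component one-dimensional Gaussian mixture is sketched below (plain NumPy, illustrative only; DQAEM itself is not reproduced here). The random initialization is exactly the sensitivity that the annealing extension is meant to ease.

      import numpy as np

      def em_two_gaussians(x, n_iter=200, seed=0):
          """Plain EM for a two-component 1-D Gaussian mixture."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x, dtype=float)
          mu = rng.choice(x, size=2, replace=False)    # random initialization
          var = np.full(2, x.var())
          w = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: posterior responsibility of each component for each point
              dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              resp = w * dens
              resp /= resp.sum(axis=1, keepdims=True)
              # M-step: re-estimate weights, means, and variances
              nk = resp.sum(axis=0)
              w = nk / len(x)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return w, mu, var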

  4. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

    A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) method, are tested in a nonlinear phase-noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML method in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.
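
    As context for the baseline being compared against, a minimal sliding-window decision-aided ML (DA-ML) phase estimator is sketched below (NumPy; the function name and window length are illustrative, and the paper's soft-decision weighting and partitioning are not reproduced).

      import numpy as np

      def da_ml_phase(received, decisions, window=10):
          """Conventional DA-ML carrier phase estimate: the reference for symbol k
          is the argument of sum(r_i * conj(m_hat_i)) over the previous `window`
          detected symbols; soft-decision variants replace the hard decisions
          m_hat_i with soft symbol estimates."""
          received = np.asarray(received, dtype=complex)
          decisions = np.asarray(decisions, dtype=complex)
          phases = np.zeros(len(received))             # phases[0] stays 0 (no history)
          for k in range(1, len(received)):
              lo = max(0, k - window)
              v = np.sum(received[lo:k] * np.conj(decisions[lo:k]))
              phases[k] = np.angle(v)
          return phases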

  5. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  6. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, without requiring an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads naturally to the scheme that embodies the best features of all of them.
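
    A toy version of the candidate-list idea described above, for BPSK over an AWGN channel, is sketched below (NumPy; the brute-force codebook membership check merely stands in for the algebraic decoder a practical scheme would use, and the function returns None if no test pattern lands on a codeword).

      import numpy as np
      from itertools import combinations

      def candidate_list_decode(r, codebook, n_weak=3):
          """Approximate ML decoding: flip only the least reliable hard decisions
          (smallest |r_i|), keep candidates that are codewords, and return the
          one closest to r in Euclidean distance (ML for AWGN).  BPSK mapping:
          bit 0 -> +1, bit 1 -> -1."""
          r = np.asarray(r, dtype=float)
          hard = (r < 0).astype(int)
          weak = np.argsort(np.abs(r))[:n_weak]        # least reliable positions
          best, best_d = None, np.inf
          for k in range(len(weak) + 1):
              for pos in combinations(weak, k):
                  cand = hard.copy()
                  cand[list(pos)] ^= 1
                  if not any((cand == cw).all() for cw in codebook):
                      continue                         # not a valid codeword
                  d = np.sum((r - (1 - 2 * cand)) ** 2)
                  if d < best_d:
                      best, best_d = cand, d
          return best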

  7. Unicellular cyanobacteria Synechocystis accommodate heterotrophic bacteria with varied enzymatic and metal resistance properties.

    PubMed

    Abdulaziz, Anas; Sageer, Saliha; Chekidhenkuzhiyil, Jasmin; Vijayan, Vijitha; Pavanan, Pratheesh; Athiyanathil, Sujith; Nair, Shanta

    2016-08-01

    The interactions between heterotrophic bacteria and primary producers have a profound impact on the functioning of marine ecosystems. We characterized the enzymatic and metal resistance properties of fourteen heterotrophic bacteria isolated from a unicellular cyanobacterium Synechocystis sp. that came from a heavy metal contaminated region of Cochin estuary, southwest coast of India. Based on 16S rRNA gene sequence similarities, the heterotrophic bacteria were grouped into three phyla: Actinobacteria, Firmicutes, and Proteobacteria. Overall, Proteobacteria showed a higher level of enzyme expression, while Actinobacteria and Firmicutes showed higher tolerance to heavy metals. Among Proteobacteria, an isolate of Marinobacter hydrocarbonoclasticus (MMRF-584) showed the highest activities of β-glucosidase (1.58 ± 0.2 μM ml(-1) min(-1)) and laminarinase (1170.17 ± 95.4 μg ml(-1) min(-1)), while two other isolates of M. hydrocarbonoclasticus, MMRF-578 and 581, showed the highest phosphatase (44.71 ± 0.2 μM ml(-1) min(-1)) and aminopeptidase (33.22 ± 0 μM ml(-1) min(-1)) activities, respectively. Among Firmicutes, Virgibacillus sp. MMRF-571 showed exceptional resistance against the toxic heavy metals Cd (180 mM), Pb (150 mM), and Hg (0.5 mM). Bacillus cereus MMRF-575 showed resistance to the highest concentrations of Co (250 mM), Cd (150 mM), Pb (180 mM), Hg (0.5 mM), Ni (280 mM), and Zn (250 mM) tested. Our results show that heterotrophic bacteria with varied enzymatic and metal resistance properties are associated with Synechocystis sp. Further studies to delineate the role of these heterotrophic bacteria in protecting primary producers from the toxic effects of heavy metals, and their potential application in bioremediation, are warranted. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. The comparison of acoustic and psychic parameters of subjective tinnitus.

    PubMed

    Karatas, Erkan; Deniz, Murat

    2012-02-01

    We aim to assess the correlation between audiometric data and psychological and acoustic measures associated with subjective tinnitus (ST) and to clarify the importance of the psychological process in determining the degree of subjective annoyance and disability due to tinnitus. Fifty-four patients experiencing unilateral ST were enrolled in the study. Acoustic assessment of patients, including LDL (loudness discomfort level), MML (minimum masking level) and RI (residual inhibition), was performed. The Tinnitus Handicap Inventory (THI), Beck Depression Inventory (BDI) and Visual Analog Scale (VAS) tests were administered for the psychological aspects of subjective annoyance. RI was positive in 23 patients, with 13 frequency-matched stimuli at 8,000 Hz. Masking treatment response was successful in 16 RI-positive patients. Mean and standard deviation (SD) of THI scores were 38.77 ± 23.63. Ten patients (18.51%) with tinnitus had a score of ≥ 17 points, which was significant for BDI. Mean and SD were 5.01 ± 2.31 for VAS-1 scores (severity of tinnitus), 7.98 ± 2.79 for VAS-2 (frequency and duration of tinnitus), 5.77 ± 2.72 for VAS-3 (discomfort level), 3.56 ± 3.30 for VAS-4 (attention deficit) and 3.31 ± 3.31 for VAS-5 (sleep disorders). A significant correlation was found between tinnitus duration, age, gender and THI scores (P < 0.05). There were statistically significant correlations between VAS 1, 2, 3 scores and LDL, MML and RI (P > 0.05). RI might be largely frequency dependent and was found to be an indicator of the masking treatment response. We did not notice statistically significant correlations between audiometric data and THI and BDI. There were correlations between VAS and LDL and between MML and RI. VAS was simpler and easier for the assessment of ST. We should consider the psychological aspects of ST and assess it as a symptom separately with acoustic and psychological tests.

  9. Precision Measurement of the Mass and Lifetime of the Ξb0 Baryon

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Anderson, J.; Andreassen, R.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Balagura, V.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Belogurov, S.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bettler, M.-O.; van Beuzekom, M.; Bien, A.; Bifani, S.; Bird, T.; Bizzeti, A.; Bjørnstad, P. M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borgia, A.; Borsato, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Brambach, T.; van den Brand, J.; Bressieux, J.; Brett, D.; Britsch, M.; Britton, T.; Brodzicka, J.; Brook, N. H.; Brown, H.; Bursche, A.; Busetto, G.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carranza-Mejia, H.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cauet, Ch.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chen, S.; Cheung, S.-F.; Chiapolini, N.; Chrzaszcz, M.; Ciba, K.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Corvo, M.; Counts, I.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Dalseno, J.; David, P.; David, P. N. Y.; Davis, A.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Silva, W.; De Simone, P.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Déléage, N.; Derkach, D.; Deschamps, O.; Dettori, F.; Di Canto, A.; Dijkstra, H.; Donleavy, S.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Dossett, D.; Dovbnya, A.; Dreimanis, K.; Dujany, G.; Dupertuis, F.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Ely, S.; Esen, S.; Evans, H.-M.; Evans, T.; Falabella, A.; Färber, C.; Farinelli, C.; Farley, N.; Farry, S.; Ferguson, D.; Fernandez Albor, V.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Fu, J.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garofoli, J.; Garra Tico, J.; Garrido, L.; Gaspar, C.; Gauld, R.; Gavardi, L.; Gavrilov, G.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianelle, A.; Giani', S.; Gibson, V.; Giubega, L.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gordon, H.; Gotti, C.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grillo, L.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. 
C.; Hall, S.; Hamilton, B.; Hampson, T.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hartmann, T.; He, J.; Head, T.; Heijne, V.; Hennessy, K.; Henrard, P.; Henry, L.; Hernando Morata, J. A.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hoballah, M.; Hombach, C.; Hulsbergen, W.; Hunt, P.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jalocha, J.; Jans, E.; Jaton, P.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kaballo, M.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Karodia, S.; Kelsey, M.; Kenyon, I. R.; Ketel, T.; Khanji, B.; Khurewathanakul, C.; Klaver, S.; Kochebina, O.; Kolpin, M.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Korolev, M.; Kozlinskiy, A.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kurek, K.; Kvaratskheliya, T.; La Thi, V. N.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lambert, R. W.; Lanciotti, E.; Lanfranchi, G.; Langenbruch, C.; Langhans, B.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Leo, S.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Liles, M.; Lindner, R.; Linn, C.; Lionetto, F.; Liu, B.; Liu, G.; Lohn, S.; Longstaff, I.; Lopes, J. H.; Lopez-March, N.; Lowdon, P.; Lu, H.; Lucchesi, D.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Machefert, F.; Machikhiliyan, I. V.; Maciuc, F.; Maev, O.; Malde, S.; Manca, G.; Mancinelli, G.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marino, P.; Märki, R.; Marks, J.; Martellotti, G.; Martens, A.; Martín Sánchez, A.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massafferri, A.; Matev, R.; Mathe, Z.; Matteuzzi, C.; Mazurov, A.; McCann, M.; McCarthy, J.; McNab, A.; McNulty, R.; McSkelly, B.; Meadows, B.; Meier, F.; Meissner, M.; Merk, M.; Milanes, D. A.; Minard, M.-N.; Moggi, N.; Molina Rodriguez, J.; Monteil, S.; Morandin, M.; Morawski, P.; Mordà, A.; Morello, M. J.; Moron, J.; Morris, A.-B.; Mountain, R.; Muheim, F.; Müller, K.; Muresan, R.; Mussini, M.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Nicol, M.; Niess, V.; Niet, R.; Nikitin, N.; Nikodem, T.; Novoselov, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Oggero, S.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Onderwater, G.; Orlandea, M.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pal, B. K.; Palano, A.; Palombo, F.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Parkes, C.; Parkinson, C. J.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrignani, C.; Pazos Alvarez, A.; Pearce, A.; Pellegrino, A.; Pepe Altarelli, M.; Perazzini, S.; Perez Trigo, E.; Perret, P.; Perrin-Terrin, M.; Pescatore, L.; Pesen, E.; Petridis, K.; Petrolini, A.; Picatoste Olloqui, E.; Pietrzyk, B.; Pilař, T.; Pinci, D.; Pistone, A.; Playfer, S.; Plo Casasus, M.; Polci, F.; Poluektov, A.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Price, E.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, W.; Rachwal, B.; Rademacker, J. H.; Rakotomiaramanana, B.; Rama, M.; Rangel, M. S.; Raniuk, I.; Rauschmayr, N.; Raven, G.; Reichert, S.; Reid, M. M.; dos Reis, A. C.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Roa Romero, D. 
A.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Perez, P.; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Rotondo, M.; Rouvinet, J.; Ruf, T.; Ruffini, F.; Ruiz, H.; Ruiz Valls, P.; Sabatino, G.; Saborido Silva, J. J.; Sagidova, N.; Sail, P.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santovetti, E.; Sapunov, M.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrie, M.; Savrina, D.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmidt, B.; Schneider, O.; Schopper, A.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Seco, M.; Semennikov, A.; Sepp, I.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Shires, A.; Silva Coutinho, R.; Simi, G.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, N. A.; Smith, E.; Smith, E.; Smith, J.; Smith, M.; Snoek, H.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Sparkes, A.; Spradlin, P.; Stagni, F.; Stahl, M.; Stahl, S.; Steinkamp, O.; Stenyakin, O.; Stevenson, S.; Stoica, S.; Stone, S.; Storaci, B.; Stracka, S.; Straticiuc, M.; Straumann, U.; Stroili, R.; Subbiah, V. K.; Sun, L.; Sutcliffe, W.; Swientek, K.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szczypka, P.; Szilard, D.; Szumlak, T.; T'Jampens, S.; Teklishyn, M.; Tellarini, G.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Topp-Joergensen, S.; Torr, N.; Tournefier, E.; Tourneur, S.; Tran, M. T.; Tresch, M.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ubeda Garcia, M.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vagnoni, V.; Valenti, G.; Vallier, A.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vecchi, S.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Vilasis-Cardona, X.; Vollhardt, A.; Volyanskyy, D.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voß, C.; Voss, H.; de Vries, J. A.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wandernoth, S.; Wang, J.; Ward, D. R.; Watson, N. K.; Websdale, D.; Whitehead, M.; Wicht, J.; Wiedner, D.; Wilkinson, G.; Williams, M. P.; Williams, M.; Wilson, F. F.; Wimberley, J.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wright, S.; Wu, S.; Wyllie, K.; Xie, Y.; Xing, Z.; Xu, Z.; Yang, Z.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, L.; Zhang, W. C.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zvyagin, A.; LHCb Collaboration

    2014-07-01

    Using a proton-proton collision data sample corresponding to an integrated luminosity of 3 fb-1 collected by LHCb at center-of-mass energies of 7 and 8 TeV, about 3800 Ξb0→Ξc+π-, Ξc+→pK-π+ signal decays are reconstructed. From this sample, the first measurement of the Ξb0 baryon lifetime is made, relative to that of the Λb0 baryon. The mass differences M(Ξb0)-M(Λb0) and M(Ξc+)-M(Λc+) are also measured with precision more than 4 times better than the current world averages. The resulting values are τ(Ξb0)/τ(Λb0) = 1.006 ± 0.018 ± 0.010, M(Ξb0)-M(Λb0) = 172.44 ± 0.39 ± 0.17 MeV/c2, and M(Ξc+)-M(Λc+) = 181.51 ± 0.14 ± 0.10 MeV/c2, where the first uncertainty is statistical and the second is systematic. The relative rate of Ξb0 to Λb0 baryon production is measured to be [f(Ξb0)/f(Λb0)] × [B(Ξb0→Ξc+π-)/B(Λb0→Λc+π-)] × [B(Ξc+→pK-π+)/B(Λc+→pK-π+)] = (1.88 ± 0.04 ± 0.03) ×10-2, where the first factor is the ratio of fragmentation fractions, b→Ξb0 relative to b→Λb0. Relative production rates as functions of transverse momentum and pseudorapidity are also presented.

  10. The amplitude and spectral index of the large angular scale anisotropy in the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Ganga, Ken; Page, Lyman; Cheng, Edward; Meyer, Stephan

    1994-01-01

    In many cosmological models, the large angular scale anisotropy in the cosmic microwave background is parameterized by a spectral index, n, and a quadrupolar amplitude, Q. For a Harrison-Peebles-Zel'dovich spectrum, n = 1. Using data from the Far Infrared Survey (FIRS) and a new statistical measure, a contour plot of the likelihood for cosmological models with -1 < n < 3 and 0 ≤ Q ≤ 50 μK is obtained. Depending upon the details of the analysis, the maximum likelihood occurs at n between 0.8 and 1.4 and Q between 18 and 21 μK. Regardless of Q, the likelihood is always less than half its maximum for n < -0.4 and for n > 2.2, as it is for Q < 8 μK and Q > 44 μK.

  11. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  12. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
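
    To make the likelihood analyzed in the two records above concrete, a simplified matrix-product form for the photon colors conditioned on the arrival times can be written as L = 1^T F(c_N) exp(K dt_{N-1}) ... F(c_1) p_eq. The sketch below (NumPy/SciPy) assumes this simplified two-state model with equal total count rates in both states and negligible background, so it illustrates the general idea rather than the full treatment of the paper.

      import numpy as np
      from scipy.linalg import expm

      def two_state_log_likelihood(times, colors, k12, k21, e1, e2):
          """Log-likelihood of photon colors given arrival times for a two-state
          model: F('A') = diag(e1, e2), F('D') = diag(1-e1, 1-e2), K the rate
          matrix, p_eq the equilibrium populations."""
          K = np.array([[-k12,  k21],
                        [ k12, -k21]], dtype=float)
          p_eq = np.array([k21, k12]) / (k12 + k21)
          F = {"A": np.diag([e1, e2]), "D": np.diag([1.0 - e1, 1.0 - e2])}
          vec = F[colors[0]] @ p_eq
          s = vec.sum()
          log_l, vec = np.log(s), vec / s              # running normalization
          for i in range(1, len(times)):
              vec = F[colors[i]] @ expm(K * (times[i] - times[i - 1])) @ vec
              s = vec.sum()
              log_l += np.log(s)
              vec /= s                                 # avoid numerical underflow
          return log_l

    Maximizing this function over (k12, k21, e1, e2), for instance with a general-purpose optimizer, gives the maximum likelihood estimates whose standard deviations are studied in the paper.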

  13. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  14. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  15. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9 and that the wind direction errors are unacceptably large, compared to those obtained for the SASS for similar assumptions.

  16. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
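
    Once the covariance (hyper)parameters are fixed, the point estimate of the GLM coefficients that all four schemes build on reduces to generalized least squares. A minimal sketch is given below (NumPy; variable names are illustrative); it also shows the ML versus ReML-style variance estimates, which differ only in the degrees-of-freedom correction.

      import numpy as np

      def glm_known_cov(y, X, V):
          """ML (= GLS) estimate of beta in y = X beta + e, e ~ N(0, sigma^2 V),
          with the correlation structure V assumed known."""
          Vi = np.linalg.inv(V)
          beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
          resid = y - X @ beta
          n, p = X.shape
          sigma2_ml = (resid @ Vi @ resid) / n         # ML variance estimate
          sigma2_reml = (resid @ Vi @ resid) / (n - p) # ReML-style correction
          return beta, sigma2_ml, sigma2_reml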

  17. Genetic distances and phylogenetic trees of different Awassi sheep populations based on DNA sequencing.

    PubMed

    Al-Atiyat, R M; Aljumaah, R S

    2014-08-27

    This study aimed to estimate evolutionary distances and to reconstruct phylogenetic trees for different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best-fitting general time reversible distance model. Three methods of distance estimation were then used. The maximum composite likelihood method was used for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was found between the South sheep and the North sheep. On the other hand, the KSA sheep, as an outgroup, showed a shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of the Jordan sheep populations from the Saudi population. The overall results support the geographical information and ecological types of the sheep populations studied. In summary, the resulting phylogenetic trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.

  18. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method based on the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂; this drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
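
    For orientation, an area-level random-intercept model of this general kind can be fitted by REML with statsmodels as sketched below; the file name, the column names (expenditure, x1, x2, area), and the model form are hypothetical stand-ins, and the fixed-effect fit combined with the predicted random effects plays the role of the EBLUP small-area predictions.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical survey data: per-capita expenditure, two auxiliary
      # covariates, and a small-area identifier.
      df = pd.read_csv("household_survey.csv")

      model = smf.mixedlm("expenditure ~ x1 + x2", data=df, groups=df["area"])
      result = model.fit(reml=True)                    # REML variance components
      print(result.summary())
      area_effects = result.random_effects             # predicted random intercepts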

  19. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    PubMed

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  20. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
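
    To make the projected-gradient idea concrete, a minimal, non-accelerated sketch is given below (NumPy; the fixed step size, iteration count, and function names are illustrative, and the paper's accelerated scheme and stopping rules are not reproduced). Each iteration takes a gradient-ascent step on the log-likelihood of the observed POVM frequencies and projects back onto the density-matrix set via an eigenvalue simplex projection.

      import numpy as np

      def project_to_density_matrix(rho):
          """Project a Hermitian matrix onto the density matrices (PSD, unit
          trace) by projecting its eigenvalues onto the probability simplex."""
          rho = (rho + rho.conj().T) / 2
          evals, evecs = np.linalg.eigh(rho)
          u = np.sort(evals)[::-1]
          css = np.cumsum(u)
          idx = np.arange(1, len(u) + 1)
          k = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
          tau = (css[k] - 1.0) / (k + 1)
          lam = np.maximum(evals - tau, 0.0)
          return (evecs * lam) @ evecs.conj().T

      def mle_projected_gradient(povm, freqs, dim, n_iter=500, step=0.1):
          """Plain projected-gradient ascent on sum_k f_k log tr(rho E_k), where
          povm is a list of POVM elements E_k and freqs the observed relative
          frequencies f_k."""
          rho = np.eye(dim, dtype=complex) / dim
          for _ in range(n_iter):
              probs = np.array([np.trace(rho @ E).real for E in povm])
              grad = sum(f / max(p, 1e-12) * E for f, p, E in zip(freqs, probs, povm))
              rho = project_to_density_matrix(rho + step * grad)
          return rho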

  1. Varied applications of a new maximum-likelihood code with complete covariance capability. [FERRET, for data adjustment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1978-01-01

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: A 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra. Reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra. Measured reaction rates, measured worths, microscopic measurements, and theoretical models are used simultaneously to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.

  2. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
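
    The underlying Richardson-Lucy/maximum-likelihood iteration is short enough to sketch directly (NumPy/SciPy; the iteration count and the uniform initial estimate are illustrative choices, and none of the extensions discussed in the paper's Appendix are included).

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
          """Classic Richardson-Lucy iteration:
          estimate <- estimate * [psf_flipped (*) (image / (psf (*) estimate))],
          where (*) denotes convolution."""
          estimate = np.full_like(image, image.mean(), dtype=float)
          psf_flipped = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, eps)  # guard against division by zero
              estimate *= fftconvolve(ratio, psf_flipped, mode="same")
          return estimate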

  3. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    PubMed

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in the recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  5. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
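
    For reference, the traditional Mayfield calculation that the new estimators are compared against amounts to a one-line formula; the numbers in the example below are made up purely for illustration.

      def mayfield_daily_survival(exposure_days, losses):
          """Classic Mayfield estimator: daily survival rate (DSR) is one minus
          the number of failures divided by the total exposure-days observed;
          survival over an interval of d days is DSR ** d."""
          return 1.0 - losses / exposure_days

      # Example (hypothetical numbers): 12 nest losses over 400 exposure-days,
      # projected over a 28-day nesting period.
      dsr = mayfield_daily_survival(400.0, 12)
      period_survival = dsr ** 28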

  6. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey.

    PubMed

    Peyre, Hugo; Leplège, Alain; Coste, Joël

    2011-03-01

    Missing items are common in quality of life (QoL) questionnaires and present a challenge for research in this field. It remains unclear which of the various methods proposed to deal with missing data performs best in this context. We compared personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques using various realistic simulation scenarios of item missingness in QoL questionnaires constructed within the framework of classical test theory. Samples of 300 and 1,000 subjects were randomly drawn from the 2003 INSEE Decennial Health Survey (of 23,018 subjects representative of the French population and having completed the SF-36) and various patterns of missing data were generated according to three different item non-response rates (3, 6, and 9%) and three types of missing data (Little and Rubin's "missing completely at random," "missing at random," and "missing not at random"). The missing data methods were evaluated in terms of accuracy and precision for the analysis of one descriptive and one association parameter for three different scales of the SF-36. For all item non-response rates and types of missing data, multiple imputation and full information maximum likelihood appeared superior to the personal mean score and especially to hot deck in terms of accuracy and precision; however, the use of personal mean score was associated with insignificant bias (relative bias <2%) in all studied situations. Whereas multiple imputation and full information maximum likelihood are confirmed as reference methods, the personal mean score appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
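
    As an illustration of the simplest of these techniques, the personal mean score with the usual half-scale rule can be sketched as follows (pandas; the 50% threshold and function name are illustrative, and other completion rules are in use).

      import numpy as np
      import pandas as pd

      def personal_mean_score(items: pd.DataFrame, min_frac=0.5):
          """Replace a respondent's missing items by the mean of his or her
          completed items in the same scale, provided at least `min_frac` of
          the items were answered; otherwise leave the row missing."""
          answered = items.notna().sum(axis=1)
          person_mean = items.mean(axis=1)
          filled = items.apply(lambda col: col.fillna(person_mean))
          filled.loc[answered < min_frac * items.shape[1]] = np.nan
          return filled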

  7. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    ERIC Educational Resources Information Center

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  8. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  10. Testing deep reticulate evolution in Amaryllidaceae Tribe Hippeastreae (Asparagales) with ITS and chloroplast sequence data

    USDA-ARS?s Scientific Manuscript database

    The phylogeny of Amaryllidaceae tribe Hippeastreae was inferred using chloroplast (3’ycf1, ndhF, trnL-F) and nuclear (ITS rDNA) sequence data under maximum parsimony and maximum likelihood frameworks. Network analyses were applied to resolve conflicting signals among data sets and putative scenarios...

  11. Phylogenetic analyses of RPB1 and RPB2 support a middle Cretaceous origin for a clade comprising all agriculturally and medically important fusaria

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most economically important and systematically challenging groups of mycotoxigenic phytopathogens and emergent human pathogens. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial RNA polymerase largest (...

  12. Multiple-hit parameter estimation in monolithic detectors.

    PubMed

    Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

    2013-02-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.

  13. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures that facilitate the routine analysis of large amounts of flight data are described. Techniques for obtaining stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are also presented. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  14. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-21

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results on Monte-Carlo simulations and real data.
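
    The Poisson maximum-likelihood core that such penalized frameworks build on is the classical MLEM update. The sketch below shows only that unpenalized baseline on a toy system matrix; the dictionary-sparsity penalty described in the paper is not included:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Unpenalized maximum-likelihood EM (MLEM) reconstruction for Poisson
    data y ~ Poisson(A x); a sparsity penalty would be added on top of
    this likelihood."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0) + eps     # column sums of the system matrix
    for _ in range(n_iter):
        expected = A @ x + eps
        x *= (A.T @ (y / expected)) / sensitivity   # multiplicative EM update
    return x

# Tiny synthetic system: 40 detector bins, 20 image pixels.
rng = np.random.default_rng(1)
A = rng.uniform(size=(40, 20))
x_true = rng.uniform(1.0, 5.0, size=20)
y = rng.poisson(A @ x_true)
print(np.round(mlem(A, y), 2))
```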

  15. A maximum likelihood analysis of the CoGeNT public dataset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelso, Chris, E-mail: ckelso@unf.edu

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 gram (330 gram fiducial mass) p-type point-contact germanium target that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  16. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  17. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
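
    Supervised maximum likelihood classification of multispectral pixels, as used here, amounts to assigning each pixel to the class whose fitted Gaussian gives the highest log-likelihood. A small illustrative sketch with made-up training classes (not the BOREAS data or processing chain) follows:

```python
import numpy as np

def train_gaussian_ml(training):
    """Fit one multivariate Gaussian (mean, covariance) per class.
    training: dict mapping class name -> (n_pixels, n_bands) array."""
    return {c: (p.mean(axis=0), np.cov(p, rowvar=False)) for c, p in training.items()}

def classify_ml(pixels, model):
    """Assign each pixel to the class with the highest Gaussian log-likelihood
    (equal prior probabilities assumed)."""
    names, scores = list(model), []
    for c in names:
        mean, cov = model[c]
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * (quad + np.log(np.linalg.det(cov))))
    return np.array(names)[np.argmax(scores, axis=0)]

# Toy three-band training "pixels" for two hypothetical cover types.
rng = np.random.default_rng(2)
training = {"conifer": rng.normal([30, 40, 20], 3, size=(200, 3)),
            "water":   rng.normal([10, 15, 60], 3, size=(200, 3))}
model = train_gaussian_ml(training)
print(classify_ml(rng.normal([29, 41, 21], 3, size=(5, 3)), model))
```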

  18. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  19. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Maximum likelihood inference implies a high, not a low, ancestral haploid chromosome number in Araceae, with a critique of the bias introduced by ‘x’

    PubMed Central

    Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.

    2012-01-01

    Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850

  1. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  2. Methods of Measurement of High Air Velocities by the Hot Wire Method

    DTIC Science & Technology

    1943-02-01

    to that of the heating current, as indicated by the minus sign. The cathode bias of the linearizing stage is then adjusted to obtain readings that...and tungsten wire. [Figure 3: Heating current of a hot wire at constant-resistance operation.]

  3. A Comparison of Accelerometers for Predicting Energy Expenditure and Vertical Ground Reaction Force in School-Age Children

    ERIC Educational Resources Information Center

    Garcia, Anne W.; Langenthal, Carla R.; Angulo-Barroso, Rosa M.; Gross, M. Melissa

    2004-01-01

    In this pilot study of 16 children, we evaluated the reliability and validity of three accelerometers (Mini-Motionlogger [MML], Computer Science Applications, Inc. Actigraph [CSA], and BioTrainer) as indicators of energy expenditure and vertical ground reaction force. The children wore 2 of each type of monitor while they walked, ran, and…

  4. Measurements of Multiparticle Correlations in d +Au Collisions at 200, 62.4, 39, and 19.6 GeV and p +Au Collisions at 200 GeV and Implications for Collective Behavior

    NASA Astrophysics Data System (ADS)

    Aidala, C.; Akiba, Y.; Alfred, M.; Andrieux, V.; Aoki, K.; Apadula, N.; Asano, H.; Ayuso, C.; Azmoun, B.; Babintsev, V.; Bagoly, A.; Bandara, N. S.; Barish, K. N.; Bathe, S.; Bazilevsky, A.; Beaumier, M.; Belmont, R.; Berdnikov, A.; Berdnikov, Y.; Blau, D. S.; Boer, M.; Bok, J. S.; Brooks, M. L.; Bryslawskyj, J.; Bumazhnov, V.; Butler, C.; Campbell, S.; Canoa Roman, V.; Cervantes, R.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Citron, Z.; Connors, M.; Cronin, N.; Csanád, M.; Csörgő, T.; Danley, T. W.; Daugherity, M. S.; David, G.; Deblasio, K.; Dehmelt, K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dion, A.; Dixit, D.; Do, J. H.; Drees, A.; Drees, K. A.; Dumancic, M.; Durham, J. M.; Durum, A.; Elder, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Fan, W.; Feege, N.; Fields, D. E.; Finger, M.; Finger, M.; Fokin, S. L.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fukuda, Y.; Gal, C.; Gallus, P.; Garg, P.; Ge, H.; Giordano, F.; Goto, Y.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Gunji, T.; Guragain, H.; Hachiya, T.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamilton, H. F.; Han, S. Y.; Hanks, J.; Hasegawa, S.; Haseler, T. O. S.; He, X.; Hemmick, T. K.; Hill, J. C.; Hill, K.; Hodges, A.; Hollis, R. S.; Homma, K.; Hong, B.; Hoshino, T.; Hotvedt, N.; Huang, J.; Huang, S.; Imai, K.; Imrek, J.; Inaba, M.; Iordanova, A.; Isenhower, D.; Ito, Y.; Ivanishchev, D.; Jacak, B. V.; Jezghani, M.; Ji, Z.; Jiang, X.; Johnson, B. M.; Jorjadze, V.; Jouan, D.; Jumper, D. S.; Kang, J. H.; Kapukchyan, D.; Karthas, S.; Kawall, D.; Kazantsev, A. V.; Khachatryan, V.; Khanzadeev, A.; Kim, C.; Kim, D. J.; Kim, E.-J.; Kim, M.; Kim, M. H.; Kincses, D.; Kistenev, E.; Klatsky, J.; Kline, P.; Koblesky, T.; Kotov, D.; Kudo, S.; Kurita, K.; Kwon, Y.; Lajoie, J. G.; Lallow, E. O.; Lebedev, A.; Lee, S.; Lee, S. H.; Leitch, M. J.; Leung, Y. H.; Lewis, N. A.; Li, X.; Lim, S. H.; Liu, L. D.; Liu, M. X.; Loggins, V.-R.; Lökös, S.; Lovasz, K.; Lynch, D.; Majoros, T.; Makdisi, Y. I.; Makek, M.; Malaev, M.; Manko, V. I.; Mannel, E.; Masuda, H.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Mendoza, M.; Metzger, W. J.; Mignerey, A. C.; Mihalik, D. E.; Milov, A.; Mishra, D. K.; Mitchell, J. T.; Mitsuka, G.; Miyasaka, S.; Mizuno, S.; Montuenga, P.; Moon, T.; Morrison, D. P.; Morrow, S. I. M.; Murakami, T.; Murata, J.; Nagai, K.; Nagashima, K.; Nagashima, T.; Nagle, J. L.; Nagy, M. I.; Nakagawa, I.; Nakagomi, H.; Nakano, K.; Nattrass, C.; Niida, T.; Nouicer, R.; Novák, T.; Novitzky, N.; Novotny, R.; Nyanin, A. S.; O'Brien, E.; Ogilvie, C. A.; Orjuela Koop, J. D.; Osborn, J. D.; Oskarsson, A.; Ottino, G. J.; Ozawa, K.; Pantuev, V.; Papavassiliou, V.; Park, J. S.; Park, S.; Pate, S. F.; Patel, M.; Peng, W.; Perepelitsa, D. V.; Perera, G. D. N.; Peressounko, D. Yu.; Perezlara, C. E.; Perry, J.; Petti, R.; Phipps, M.; Pinkenburg, C.; Pisani, R. P.; Pun, A.; Purschke, M. L.; Radzevich, P. V.; Read, K. F.; Reynolds, D.; Riabov, V.; Riabov, Y.; Richford, D.; Rinn, T.; Rolnick, S. D.; Rosati, M.; Rowan, Z.; Runchey, J.; Safonov, A. S.; Sakaguchi, T.; Sako, H.; Samsonov, V.; Sarsour, M.; Sato, K.; Sato, S.; Schaefer, B.; Schmoll, B. K.; Sedgwick, K.; Seidl, R.; Sen, A.; Seto, R.; Sexton, A.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shioya, T.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Singh, B. K.; Singh, C. P.; Singh, V.; Skoby, M. J.; Slunečka, M.; Smith, K. L.; Snowball, M.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stoll, S. 
P.; Sugitate, T.; Sukhanov, A.; Sumita, T.; Sun, J.; Syed, S.; Sziklai, J.; Takeda, A.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tarnai, G.; Tieulent, R.; Timilsina, A.; Todoroki, T.; Tomášek, M.; Towell, C. L.; Towell, R. S.; Tserruya, I.; Ueda, Y.; Ujvari, B.; van Hecke, H. W.; Vazquez-Carson, S.; Velkovska, J.; Virius, M.; Vrba, V.; Vukman, N.; Wang, X. R.; Wang, Z.; Watanabe, Y.; Watanabe, Y. S.; Wong, C. P.; Woody, C. L.; Xu, C.; Xu, Q.; Xue, L.; Yalcin, S.; Yamaguchi, Y. L.; Yamamoto, H.; Yanovich, A.; Yin, P.; Yoo, J. H.; Yoon, I.; Yu, H.; Yushmanov, I. E.; Zajc, W. A.; Zelenski, A.; Zharko, S.; Zou, L.; Phenix Collaboration

    2018-02-01

    Recently, multiparticle-correlation measurements of relativistic p/d/3He+Au, p+Pb, and even p+p collisions show surprising collective signatures. Here, we present beam-energy-scan measurements of two-, four-, and six-particle angular correlations in d+Au collisions at √s_NN = 200, 62.4, 39, and 19.6 GeV. We also present measurements of two- and four-particle angular correlations in p+Au collisions at √s_NN = 200 GeV. We find the four-particle cumulant to be real valued for d+Au collisions at all four energies. We also find that the four-particle cumulant in p+Au has the opposite sign as that in d+Au. Further, we find that the six-particle cumulant agrees with the four-particle cumulant in d+Au collisions at 200 GeV, indicating that nonflow effects are subdominant. These observations provide strong evidence that the correlations originate from the initial geometric configuration, which is then translated into the momentum distribution for all particles, commonly referred to as collectivity.

  5. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

    Simulations in the warm dense matter regime using finite temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(NT^-1) and is compared with the high-temperature limit scaling O(N^3 T^3).

  Enhanced low-energy γ-decay strength of 70Ni and its robustness within the shell model

    NASA Astrophysics Data System (ADS)

    Larsen, A. C.; Midtbø, J. E.; Guttormsen, M.; Renstrøm, T.; Liddick, S. N.; Spyrou, A.; Karampagia, S.; Brown, B. A.; Achakovskiy, O.; Kamerdzhiev, S.; Bleuel, D. L.; Couture, A.; Campo, L. Crespo; Crider, B. P.; Dombos, A. C.; Lewis, R.; Mosby, S.; Naqvi, F.; Perdikakis, G.; Prokop, C. J.; Quinn, S. J.; Siem, S.

    2018-05-01

    Neutron-capture reactions on very neutron-rich nuclei are essential for heavy-element nucleosynthesis through the rapid neutron-capture process, now shown to take place in neutron-star merger events. For these exotic nuclei, radiative neutron capture is extremely sensitive to their γ-emission probability at very low γ energies. In this work, we present measurements of the γ-decay strength of 70Ni over the wide range 1.3 ≤ Eγ ≤ 8 MeV. A significant enhancement is found in the γ-decay strength for transitions with Eγ < 3 MeV. At present, this is the most neutron-rich nucleus displaying this feature, proving that this phenomenon is not restricted to stable nuclei. We have performed E1-strength calculations within the quasiparticle time-blocking approximation, which describe our data above Eγ ≃ 5 MeV very well. Moreover, large-scale shell-model calculations indicate an M1 nature of the low-energy γ strength. This turns out to be remarkably robust with respect to the choice of interaction, truncation, and model space, and we predict its presence in the whole isotopic chain, in particular the neutron-rich 72,74,76Ni.

  6. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network data stream. PMID:23956693

  7. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network data stream.
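
    PRESEE itself is not reproduced here, but the MDL idea it builds on can be illustrated with a generic two-part code length for piecewise-constant Gaussian segments, minimized by dynamic programming. The cost definition, minimum segment length, and toy stream below are all assumptions for illustration:

```python
import numpy as np

def seg_cost(segment, n_total):
    """Code length (nats) of one segment: Gaussian data cost using the
    segment's own mean and variance, plus an MDL-style parameter penalty."""
    n = len(segment)
    var = max(np.var(segment), 1e-6)
    return 0.5 * n * np.log(2 * np.pi * np.e * var) + np.log(n_total)

def mdl_segment(x, min_len=20):
    """Segmentation minimizing total MDL cost by exact dynamic programming."""
    n = len(x)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            c = best[i] + seg_cost(x[i:j], n)
            if c < best[j]:
                best[j], back[j] = c, i
    cuts, j = [], n
    while j > 0:
        cuts.append(j)
        j = back[j]
    return sorted(cuts)

rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0, 1, 100),
                         rng.normal(5, 1, 100),
                         rng.normal(2, 1, 100)])
print(mdl_segment(stream))   # segment boundaries; true change points are at 100 and 200
```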

  8. Maximum likelihood decoding of Reed Solomon Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudan, M.

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F x F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  9. Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables

    NASA Astrophysics Data System (ADS)

    Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan

    2007-06-01

    Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing techniques have proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the accuracy with which wetland grass communities can be mapped remains unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong relationship with the distribution of grass communities; NDVI layers stacked from images of different months were used to generate the Carex community map, and the October image was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree, and an artificial neural network separately. The results show that environmental variables can improve mapping accuracy, and that classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). In addition, maximum likelihood (a=92.71%, k=0.90) and the artificial neural network (a=94.79%, k=0.93) perform significantly better than the decision tree (a=86.46%, k=0.83).

  10. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    PubMed

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
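
    A much-simplified version of the estimation problem described above is fitting a single occupancy to baseline and post-drug regional binding potentials under multivariate normal errors with a known covariance matrix. The sketch below is only that illustrative simplification, not the authors' covariance model:

```python
import numpy as np

def occupancy_gls(bp_base, bp_post, cov):
    """GLS / maximum-likelihood estimate of a single occupancy r under the
    simplified model BP_post = (1 - r) * BP_base + noise, with multivariate
    normal noise of known covariance `cov` across regions."""
    W = np.linalg.inv(cov)
    one_minus_r = (bp_base @ W @ bp_post) / (bp_base @ W @ bp_base)
    se = np.sqrt(1.0 / (bp_base @ W @ bp_base))    # asymptotic standard error
    return 1.0 - one_minus_r, se

# Toy data: five regions, 60% true occupancy, correlated regional noise.
rng = np.random.default_rng(4)
bp_base = np.array([2.0, 1.5, 1.2, 0.8, 0.5])
cov = 0.01 * np.eye(5) + 0.003                     # equal variances, modest covariance
bp_post = 0.4 * bp_base + rng.multivariate_normal(np.zeros(5), cov)
print(occupancy_gls(bp_base, bp_post, cov))
```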

  11. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
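
    For context, an ordinary (not bias-reduced) maximum-likelihood fit of a cumulative Gaussian psychometric function can be written in a few lines; the bias-reduced estimator discussed in the paper modifies this likelihood. The stimulus levels and responses below are simulated for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(stimulus, response):
    """Ordinary maximum-likelihood fit of a cumulative Gaussian psychometric
    function P(positive response) = Phi((x - mu) / sigma)."""
    def nll(params):
        mu, log_sigma = params
        p = norm.cdf((stimulus - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))
    fit = minimize(nll, x0=[np.median(stimulus), 0.0], method="Nelder-Mead")
    mu, log_sigma = fit.x
    return mu, np.exp(log_sigma)

# Simulated responses with true mu = 0.5 and sigma = 1.2.
rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 200)
y = (rng.uniform(size=200) < norm.cdf((x - 0.5) / 1.2)).astype(float)
print(fit_psychometric(x, y))   # should recover roughly (0.5, 1.2)
```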

  12. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  13. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
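
    The two-step structure described in these records (correlation with the model, followed by a full system inversion) is the usual ML solution of a linear Gaussian model. The sketch below illustrates it with a small dense random model matrix rather than an actual stripmap SAR operator:

```python
import numpy as np

def ml_two_step(H, y):
    """ML estimate of g in the linear model y = H g + n with white Gaussian
    noise, written as the two steps described above: a correlation
    (back-projection-like) step followed by a full system inversion."""
    correlated = H.conj().T @ y                           # step 1: correlate with the model
    g_hat = np.linalg.solve(H.conj().T @ H, correlated)   # step 2: system inversion
    # For noise variance sigma^2, the CRLB covariance is sigma^2 * inv(H^H H).
    return g_hat

rng = np.random.default_rng(6)
H = rng.normal(size=(120, 30)) + 1j * rng.normal(size=(120, 30))
g_true = rng.normal(size=30)
y = H @ g_true + 0.1 * (rng.normal(size=120) + 1j * rng.normal(size=120))
print(np.round(ml_two_step(H, y).real[:5], 2), np.round(g_true[:5], 2))
```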

  14. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
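
    A stripped-down version of the rating-curve idea behind LOADEST is a log-log regression of load on streamflow with a retransformation bias correction. The sketch below uses ordinary least squares and Duan's smearing estimator only; it does not implement LOADEST's AMLE/MLE/LAD options or its censored-data handling:

```python
import numpy as np

def rating_curve_loads(q_cal, c_cal, q_new):
    """Regress ln(load) on ln(Q) for a calibration period, then estimate loads
    for new flows with Duan's smearing retransformation correction."""
    load_cal = c_cal * q_cal                       # load = concentration * flow
    X = np.column_stack([np.ones_like(q_cal), np.log(q_cal)])
    beta, *_ = np.linalg.lstsq(X, np.log(load_cal), rcond=None)
    resid = np.log(load_cal) - X @ beta
    smear = np.mean(np.exp(resid))                 # Duan (1983) bias correction
    X_new = np.column_stack([np.ones_like(q_new), np.log(q_new)])
    return smear * np.exp(X_new @ beta)

# Synthetic calibration data: flows and concentrations with a power-law relation.
rng = np.random.default_rng(7)
q = rng.lognormal(2.0, 0.6, 200)
c = 0.5 * q ** 0.3 * rng.lognormal(0.0, 0.2, 200)
print(np.round(rating_curve_loads(q, c, np.array([5.0, 20.0, 80.0])), 1))
```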

  15. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837

  16. The effect of natural and anthropogenic factors on sorption of copper in chernozem

    NASA Astrophysics Data System (ADS)

    Bauer, Tatiana; Minkina, Tatiana; Mandzhieva, Saglara; Pinskii, David; Linnik, Vitaly; Sushkova, Svetlana

    2016-04-01

    The aim of this work was to study the effect of the attendant anions and particle-size distribution on the adsorption of copper by ordinary chernozem. Solutions of HM (heavy metal) nitrates, acetates, chlorides, and sulfates were used to study the effect of the chemical composition of added copper salts on the adsorption of copper by an ordinary chernozem. Samples of the soil sieved through a 1-mm sieve in the natural ionic form and soil fractions with different particle sizes (clay, with particles < 1 μm, and physical clay, < 10 μm) were treated with solutions of the corresponding copper salts at a soil : solution ratio of 1:10. The concentrations of the initial copper solutions were 0.02, 0.05, 0.08, 0.1, 0.3, 0.5, and 1.0 mM/L. The range of Cu2+ concentrations in the studied system covers different geochemical situations corresponding to the actual levels of soil contamination with the metal under study. The suspensions were shaken for 1 h, left to stand for 24 h, and then filtered. The contents of the HM in the filtrates were determined by atomic absorption spectrometry (AAS). The contents of the adsorbed copper cations were calculated from the difference between the metal concentrations in the initial and equilibrium solutions. The isotherms of copper adsorption from the metal nitrate, chloride, and sulfate solutions have nearly linear shapes and, hence, can be satisfactorily described by a Henry or Freundlich equation: C_ads = K_H · C_eq (1); C_ads = K_F · C_eq^n (2), where C_ads is the content of the adsorbed cations, mM/kg soil; C_eq is the concentration of copper in the equilibrium solution, mM/L; and K_H and K_F denote the Henry and Freundlich adsorption coefficients, respectively, kg/L. The isotherm of Cu2+ adsorption by ordinary chernozem from acetate solutions is described by the Langmuir equation: C_ads = C_∞ · K_L · C / (1 + K_L · C) (3), where C_ads is the content of the adsorbed cations, mM/kg soil; C_∞ is the maximum adsorption of the HM, mM/kg soil; K_L is the affinity constant, L/mM; and C is the concentration of the HM in the equilibrium solution, mM/L. According to the values of K_H, the binding strength of the copper cations adsorbed from different salt solutions decreases in the series: Cu(Ac)2 (1880.5 ± 76.2) > CuCl2 (1442.8 ± 113.5) > Cu(NO3)2 (911.4 ± 31.1) >> CuSO4 (165.3 ± 12.9). Thus, copper is most strongly adsorbed from the acetate solution and least strongly from the sulfate solution. The adsorption of copper by the clay and physical clay fractions of the ordinary chernozem was limited and followed equation (3). In the particle-size fractions separated from the soils, the concentrations of copper decreased with decreasing particle size. The values of K_L and C_∞ characterizing the HM adsorption by the chernozem and its particle-size fractions formed the following sequence: clay (80.20 ± 20.29 and 28.45 ± 0.46) > physical clay (58.20 ± 14.54 and 22.15 ± 1.22) > entire soil (38.80 ± 12.33 and 17.58 ± 3.038). This work was supported by the Russian Ministry of Education and Science, project no. 5.885.2014/K, and the Russian Foundation for Basic Research, projects no. 14-05-00586…
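
    Fitting the Freundlich and Langmuir isotherms of equations (2) and (3) to measured adsorption data is a routine nonlinear least-squares task. The sketch below uses invented data points purely to illustrate the fitting step; the parameter values have no connection to the study above:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c_eq, k_f, n):
    return k_f * c_eq ** n                            # Eq. (2)

def langmuir(c_eq, c_inf, k_l):
    return c_inf * k_l * c_eq / (1.0 + k_l * c_eq)    # Eq. (3)

# Hypothetical adsorption data: equilibrium concentration (mM/L) vs adsorbed (mM/kg).
c_eq = np.array([0.01, 0.03, 0.06, 0.10, 0.25, 0.45, 0.80])
c_ads = np.array([3.0, 7.5, 12.0, 16.0, 22.0, 25.5, 27.5])

(k_f, n), _ = curve_fit(freundlich, c_eq, c_ads, p0=[20.0, 0.5])
(c_inf, k_l), _ = curve_fit(langmuir, c_eq, c_ads, p0=[30.0, 5.0])
print(f"Freundlich: K_F={k_f:.1f}, n={n:.2f};  Langmuir: C_inf={c_inf:.1f}, K_L={k_l:.1f}")
```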

  17. Multiple-Hit Parameter Estimation in Monolithic Detectors

    PubMed Central

    Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.

    2014-01-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231

  18. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  19. Glutamate receptor-channel gating. Maximum likelihood analysis of gigaohm seal recordings from locust muscle.

    PubMed Central

    Bates, S E; Sansom, M S; Ball, F G; Ramsey, R L; Usherwood, P N

    1990-01-01

    Gigaohm recordings have been made from glutamate receptor channels in excised, outside-out patches of collagenase-treated locust muscle membrane. The channels in the excised patches exhibit the kinetic state switching first seen in megaohm recordings from intact muscle fibers. Analysis of channel dwell time distributions reveals that the gating mechanism contains at least four open states and at least four closed states. Dwell time autocorrelation function analysis shows that there are at least three gateways linking the open states of the channel with the closed states. A maximum likelihood procedure has been used to fit six different gating models to the single channel data. Of these models, a cooperative model yields the best fit, and accurately predicts most features of the observed channel gating kinetics. PMID:1696510

  1. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  2. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  3. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  4. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
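
    The modified Newton-Raphson (quasilinearization) idea can be illustrated with a Gauss-Newton output-error fit of a toy first-order system, using a numerical Jacobian and step halving. This is a generic sketch of the iteration, not the FORTRAN program described above:

```python
import numpy as np

def simulate(theta, u, y0=0.0):
    """Model response for y[k+1] = a*y[k] + b*u[k]."""
    a, b = theta
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        y[k + 1] = a * y[k] + b * u[k]
    return y[1:]

def modified_newton_raphson(u, y_meas, theta0, n_iter=20):
    """Output-error maximum-likelihood fit (Gaussian noise) by Gauss-Newton
    iterations with step halving."""
    theta = np.array(theta0, float)
    sse = lambda th: np.sum((y_meas - simulate(th, u)) ** 2)
    for _ in range(n_iter):
        r = y_meas - simulate(theta, u)
        J = np.empty((len(r), len(theta)))
        for j in range(len(theta)):                  # numerical Jacobian
            dt = np.zeros_like(theta)
            dt[j] = 1e-6
            J[:, j] = (simulate(theta + dt, u) - simulate(theta, u)) / 1e-6
        step = np.linalg.solve(J.T @ J, J.T @ r)     # Gauss-Newton step
        while sse(theta + step) > sse(theta) and np.linalg.norm(step) > 1e-12:
            step *= 0.5                              # halve until the cost improves
        theta = theta + step
    return theta

rng = np.random.default_rng(9)
u = rng.normal(size=300)
y_meas = simulate([0.8, 0.5], u) + 0.05 * rng.normal(size=300)
print(modified_newton_raphson(u, y_meas, theta0=[0.5, 0.1]))   # approx [0.8, 0.5]
```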

  5. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
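
    The univariate method of moments that this work generalizes is the familiar DerSimonian-Laird estimator, which can be written compactly as follows (with hypothetical study effects and variances):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird method-of-moments estimate of the
    between-study variance tau^2 and the resulting random-effects pooled mean."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # truncated at zero
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re)), tau2

effects = [0.30, 0.15, 0.55, 0.42, 0.10]          # hypothetical study estimates
variances = [0.02, 0.03, 0.04, 0.02, 0.05]        # their within-study variances
print(dersimonian_laird(effects, variances))
```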

  6. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples to both flight and simulated data cases.

  7. Estimation of longitudinal stability and control derivatives for an icing research aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Batterson, James G.; Omara, Thomas M.

    1989-01-01

    The results of applying a modified stepwise regression algorithm and a maximum likelihood algorithm to flight data from a twin-engine commuter-class icing research aircraft are presented. The results are in the form of body-axis stability and control derivatives related to the short-period, longitudinal motion of the aircraft. Data were analyzed for the baseline (uniced) and for the airplane with an artificial glaze ice shape attached to the leading edge of the horizontal tail. The results are discussed as to the accuracy of the derivative estimates and the difference between the derivative values found for the baseline and the iced airplane. Additional comparisons were made between the maximum likelihood results and the modified stepwise regression results with causes for any discrepancies postulated.

  8. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  9. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    […]^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [Contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
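
    The recoverable fragment above matches the rank-one (Sherman-Morrison) special case of Woodbury's formula, (Q + aa')^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a), which is easy to check numerically:

```python
import numpy as np

# Numerical check of the rank-one (Sherman-Morrison) special case of Woodbury's formula.
rng = np.random.default_rng(8)
Q = np.diag(rng.uniform(1.0, 2.0, 5))          # a positive-definite matrix
a = rng.normal(size=5)

Qinv = np.linalg.inv(Q)
lhs = np.linalg.inv(Q + np.outer(a, a))
rhs = Qinv - np.outer(Qinv @ a, Qinv @ a) / (1.0 + a @ Qinv @ a)
print(np.allclose(lhs, rhs))                   # True
```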

  10. Preparation of a Homologous (Human) Intravenous Botulinal Immune Globulin.

    DTIC Science & Technology

    1983-05-01

    lipoprotein (HDL) per ml of plasma to ŗ.06 mg/ml for beta-lipoprotein (LDL). Triglyceride and cholesterol levels were intermediate within this … [Table residue: levels of lipoprotein during fractionation: alpha-lipoprotein (HDL), beta-lipoprotein (LDL), triglyceride, and cholesterol, all in mg/ml, for plasma and IVBG samples.] … plasminogen, prekallikrein, triglycerides, cholesterol, alpha-lipoprotein, beta-lipoprotein, clotting factors, fibrinogen and complement

  11. Applications of non-standard maximum likelihood techniques in energy and resource economics

    NASA Astrophysics Data System (ADS)

    Moeltner, Klaus

    Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased, even if derived through PML, if the recreation model is based on aggregate, or zonal, data. To address this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment eliminates the unrealistic constraint of constant income across zonal residents, and thus reduces the risk of aggregation bias in estimated macro-parameters. The corrected aggregate specification reinstates the applicability of PML. It also increases model efficiency and allows for the generation of welfare estimates for population subgroups.
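
    As an aside on the mechanics, a tiny sketch (hypothetical model and names, not the thesis code) of the quasi-random device used in SML: Halton-based normal draws averaged to approximate one simulated probability for a random-coefficient probit.

      # Approximate P(y = 1 | x) when the slope coefficient is random, beta ~ N(mean, sd^2),
      # by averaging over Halton-sequence draws transformed to standard normals.
      import numpy as np
      from scipy.stats import norm, qmc

      halton = qmc.Halton(d=1, scramble=True, seed=0)
      u = norm.ppf(halton.random(256)).ravel()      # 256 quasi-random standard-normal draws

      def simulated_prob(x, beta_mean, beta_sd):
          betas = beta_mean + beta_sd * u
          return norm.cdf(x * betas).mean()

      print(simulated_prob(x=1.0, beta_mean=0.5, beta_sd=0.8))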

  12. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses the probability of failure by simultaneously modeling all likely events; the probability that each event causes failure, together with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram, with Latin-hypercube sampling used to stochastically assess events; the overall probability of failure is taken as the maximum probability of failure over all the events. The Likelihood of Occurrence simulation suggests that failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
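
    A minimal sketch (hypothetical inputs and limit-state function, not the report's model) of the Latin-hypercube sampling step: uniform LHS points are mapped to the input distributions, and the probability of failure is the fraction of samples in which demand exceeds capacity.

      # Latin-hypercube estimate of a failure probability for an illustrative
      # load-versus-strength limit state.
      import numpy as np
      from scipy.stats import norm, qmc

      sampler = qmc.LatinHypercube(d=2, seed=0)
      u = sampler.random(10000)                            # LHS points in [0, 1)^2
      load = norm.ppf(u[:, 0], loc=100.0, scale=15.0)      # hypothetical demand
      strength = norm.ppf(u[:, 1], loc=150.0, scale=10.0)  # hypothetical capacity

      p_fail = np.mean(load > strength)
      print("estimated probability of failure:", p_fail)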

  13. Interim Scientific Report: AFOSR-81-0122.

    DTIC Science & Technology

    1983-05-05

    Maximum likelihood. 2 Periton Lane, Minehead, TA24 8AQ, England. … Attachment 5

  14. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
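
    A short numerical sketch (my own code; the log-posterior below follows the form given in the published optBINS derivation, up to an additive constant, and assumes equal-width bins over the data range).

      # Relative log-posterior for the number of equal-width bins M, given N data points,
      # from a multinomial likelihood with a Jeffreys-type prior on the bin probabilities.
      import numpy as np
      from scipy.special import gammaln

      def log_posterior_bins(data, m):
          n = len(data)
          counts, _ = np.histogram(data, bins=m)
          return (n * np.log(m)
                  + gammaln(m / 2) - m * gammaln(0.5) - gammaln(n + m / 2)
                  + gammaln(counts + 0.5).sum())

      rng = np.random.default_rng(0)
      data = rng.normal(size=1000)
      m_grid = np.arange(2, 101)
      log_post = [log_posterior_bins(data, m) for m in m_grid]
      print("optimal number of bins:", m_grid[int(np.argmax(log_post))])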

  15. Integrated Efforts for Analysis of Geophysical Measurements and Models.

    DTIC Science & Technology

    1997-09-26

    This contract supported investigations of integrated applications of physics, ephemerides … [Contents residue: regions and GPS data validations; PL-SCINDA: visualization and analysis techniques (view controls, map selection) …] … and IR data, about cloudy pixels. Clustering and maximum likelihood classification algorithms categorize up to four cloud layers into stratiform or …

  16. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
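
    For orientation, a rough sketch (my own simplified rendering of a self-consistency iteration of the Efron-Petrosian type, assuming distinct observed values; not the authors' code) of how the NPMLE can be computed for doubly truncated data.

      # Each x[i] is observed only because u[i] <= x[i] <= v[i]; the NPMLE places point
      # masses f[j] at the observed values and is found by a self-consistency fixed point.
      import numpy as np

      rng = np.random.default_rng(0)
      x_all = rng.exponential(1.0, size=2000)
      u_all = rng.uniform(0.0, 1.5, size=2000)
      v_all = u_all + 1.0
      keep = (u_all <= x_all) & (x_all <= v_all)         # double truncation
      x, u, v = x_all[keep], u_all[keep], v_all[keep]
      n = len(x)

      J = (u[:, None] <= x[None, :]) & (x[None, :] <= v[:, None])   # J[i, j] = 1{u_i <= x_j <= v_i}
      f = np.full(n, 1.0 / n)
      for _ in range(500):
          F = J @ f                                      # estimated prob. of falling in window i
          f = 1.0 / (J.T @ (1.0 / F))
          f /= f.sum()
      print("estimated P(X <= 1):", f[x <= 1.0].sum())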

  17. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
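
    A toy sketch (hypothetical first-order model, not the NLSCIDNT code) of the estimation idea: for Gaussian measurement noise, minimizing the negative log-likelihood reduces to a nonlinear least-squares fit of the simulated response to the measured time history, which Levenberg-Marquardt handles well.

      # Identify the coefficients a, b of xdot = a*x + b*u from noisy measurements of x.
      import numpy as np
      from scipy.optimize import least_squares

      dt, n = 0.05, 200
      u = np.ones(n)                                   # step control input

      def simulate(a, b):
          x = np.zeros(n)
          for k in range(n - 1):                       # Euler integration of the model
              x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
          return x

      rng = np.random.default_rng(0)
      z = simulate(-1.5, 2.0) + rng.normal(scale=0.05, size=n)   # noisy "flight" data

      fit = least_squares(lambda p: simulate(p[0], p[1]) - z, x0=[-0.5, 1.0], method="lm")
      print("identified a, b:", fit.x)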

  18. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility of extracting, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
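
    A toy illustration (not the paper's World Trade Web analysis): for the simplest one-parameter random-graph model, the maximum likelihood condition equates the expected and observed numbers of links, so the ML estimate of the connection probability is just the observed link density.

      # ML estimate of p in an Erdos-Renyi-type model from an observed undirected network.
      import numpy as np

      rng = np.random.default_rng(0)
      N, p_true = 200, 0.1
      upper = np.triu(rng.random((N, N)) < p_true, k=1)   # independent links above the diagonal
      adj = upper | upper.T                               # symmetric adjacency matrix

      L = int(adj.sum()) // 2                             # observed number of links
      p_hat = L / (N * (N - 1) / 2)                       # matches expected to observed link count
      print("observed links:", L, " ML estimate of p:", p_hat)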

  19. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.
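
    A quick numerical check (a generic uniform(0, θ) example, not necessarily the paper's exact construction) of the kind of behavior at issue: the MLE max(X) is biased downward, while a simple rescaling of the same sufficient statistic is unbiased.

      # For X_1, ..., X_n ~ U(0, theta): E[max(X)] = n/(n+1) * theta, so (n+1)/n * max(X) is unbiased.
      import numpy as np

      rng = np.random.default_rng(0)
      theta, n, reps = 1.0, 10, 200000
      x = rng.uniform(0, theta, size=(reps, n))
      mle = x.max(axis=1)
      rescaled = (n + 1) / n * mle

      print("mean of the MLE:      ", mle.mean())       # close to 10/11 = 0.909
      print("mean of (n+1)/n * MLE:", rescaled.mean())  # close to theta = 1.0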

  20. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
