Sample records for row-action maximum likelihood

  1. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  2. The Order-Restricted Association Model: Two Estimation Algorithms and Issues in Testing

    ERIC Educational Resources Information Center

    Galindo-Garre, Francisca; Vermunt, Jeroen K.

    2004-01-01

    This paper presents a row-column (RC) association model in which the estimated row and column scores are forced to be in agreement with a priori specified ordering. Two efficient algorithms for finding the order-restricted maximum likelihood (ML) estimates are proposed and their reliability under different degrees of association is investigated by…

  3. Description and phylogenetic relationships of a new genus and two new species of lizards from Brazilian Amazonia, with nomenclatural comments on the taxonomy of Gymnophthalmidae (Reptilia: Squamata).

    PubMed

    Colli, Guarino R; Hoogmoed, Marinus S; Cannatella, David C; Cassimiro, José; Gomes, Jerriane Oliveira; Ghellere, José Mário; Nunes, Pedro M Sales; Pellegrino, Kátia C M; Salerno, Patricia; Souza, Sergio Marques De; Rodrigues, Miguel Trefaut

    2015-08-18

    We describe a new genus and two new species of gymnophthalmid lizards based on specimens collected from Brazilian Amazonia, mostly in the "arc of deforestation". The new genus is easily distinguished from other Gymnophthalmidae by having very wide, smooth, and imbricate nuchals, arranged in two longitudinal and 6-10 transverse rows from nape to brachium level, followed by much narrower, strongly keeled, lanceolate, and mucronate scales. It also differs from all other Gymnophthalmidae, except Iphisa, by the presence of two longitudinal rows of ventrals. The new genus differs from Iphisa by having two pairs of enlarged chinshields (one in Iphisa); posterior dorsal scales lanceolate, strongly keeled and not arranged in longitudinal rows (dorsals broad, smooth and forming two longitudinal rows), and lateral scales keeled (smooth). Maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses based on morphological and molecular data indicate the new species form a clade that is most closely related to Iphisa. We also address several nomenclatural issues and present a revised classification of Gymnophthalmidae.

  4. Evaluation of dynamic row-action maximum likelihood algorithm reconstruction for quantitative 15O brain PET.

    PubMed

    Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi

    2009-09-01

    A modified version of row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, or dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative (15)O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included 3 sequential PET scans for C(15)O, (15)O(2) and H(2)(15)O. First, the number of main iterations (N(it)) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N(it) was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO(2)) and oxygen extraction fraction (OEF) were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolutions were matched with FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N(it) = 3 was determined as an optimal parameter for (15)O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in a low-activity region compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO(2) and OEF were 46.1 +/- 4.5 (mL/100 mL/min), 3.35 +/- 0.40 (mL/100 mL), 3.42 +/- 0.35 (mL/100 mL/min) and 42.1 +/- 3.8 (%), respectively. These values were comparable to corresponding values with FBP images: 46.6 +/- 4.6 (mL/100 mL/min), 3.34 +/- 0.39 (mL/100 mL), 3.48 +/- 0.34 (mL/100 mL/min) and 42.4 +/- 3.8 (%), respectively. DRAMA reconstruction is applicable to quantitative (15)O PET study and is superior to conventional FBP in terms of image quality.
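
    For illustration, a minimal sketch of a RAMLA-style row-action update with a subset-dependent relaxation parameter (illustrative Python, not the authors' implementation; the relaxation schedule and function names here are assumptions):

        import numpy as np

        def ramla_like(A, y, n_subsets=10, n_main_iter=3, beta=20.0):
            # A: (n_bins, n_pixels) system matrix; y: measured sinogram counts.
            # The relaxation shrinks with the subset index, loosely mimicking
            # DRAMA's 'subset-dependent' relaxation (exact schedule assumed).
            n_bins, n_pix = A.shape
            x = np.ones(n_pix)                          # positive initial image
            subsets = np.array_split(np.arange(n_bins), n_subsets)
            for _ in range(n_main_iter):
                for k, S in enumerate(subsets):
                    lam = beta / (beta + k)             # assumed relaxation schedule
                    As, ys = A[S], y[S]
                    ratio = ys / np.maximum(As @ x, 1e-12)
                    # row-action ML step: additive, scaled by the current image
                    x = x + lam * x * (As.T @ (ratio - 1.0))
                    x = np.maximum(x, 0.0)              # keep the image nonnegative
            return x

    The row-action character is the point: the image is updated after every subset of projections rather than once per full pass, and the decaying relaxation suppresses noise as iterations proceed.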

  5. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
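
    The "axiom of conditional probability" invoked above is simply the factorization of the joint density. With continuous indicators x_c and ordinal indicators x_o, the two symmetric orderings (notation mine, not the paper's) are

        p(x_c, x_o) = p(x_c) p(x_o | x_c) = p(x_o) p(x_c | x_o),

    and the heuristic mentioned in the abstract amounts to selecting whichever ordering is cheaper to integrate for a given row of data.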

  6. An FPGA-Based Real-Time Maximum Likelihood 3D Position Estimation for a Continuous Crystal PET Detector

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei

    2016-02-01

    For the continuous crystal-based positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinate of the interaction position from the single-end detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 photomultiplier (PMT) channels into eight row signals and eight column signals to be read out for independent X- and Y-coordinate estimation. Using reference events irradiated at a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated; with these PDFs, reference events from perpendicular irradiation are assigned to DOI segments to generate the PDFs for X and Y estimation in each DOI layer. Evaluated on experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis and a DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of the crystal. The performance improvements from 2D estimation to the 3D algorithm are also presented. Benefiting from the abundant resources of the FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented in a mid-scale FPGA. With a parallel, pipelined structure, the 3D position estimator on the FPGA achieves a processing throughput of 15 M events/s, which is sufficient for real-time PET imaging.
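
    A minimal sketch of the table-lookup ML step (illustrative Python, not the authors' FPGA pipeline; the array layout and amplitude binning are my assumptions):

        import numpy as np

        def ml_position(signals, log_pdf, bin_edges):
            # log_pdf[p, c, b]: precomputed log-probability that channel c reads
            # an amplitude in histogram bin b, given an interaction at candidate
            # position p (candidates enumerating, say, an X-DOI grid).
            bins = np.clip(np.digitize(signals, bin_edges) - 1,
                           0, log_pdf.shape[2] - 1)      # quantize amplitudes
            n_ch = log_pdf.shape[1]
            # total log-likelihood of the observed event at each candidate
            ll = log_pdf[:, np.arange(n_ch), bins].sum(axis=1)
            return np.argmax(ll)                         # ML position index

    The same argmax over summed table entries is the kind of operation that maps naturally onto lookup tables and pipelined adders in an FPGA.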

  7. Safety of Tofacitinib in the Treatment of Rheumatoid Arthritis in Latin America Compared With the Rest of the World Population.

    PubMed

    Castañeda, Oswaldo M; Romero, Felix J; Salinas, Ariel; Citera, Gustavo; Mysler, Eduardo; Rillo, Oscar; Radominski, Sebastiao C; Cardiel, Mario H; Jaller, Juan J; Alvarez-Moreno, Carlos; Ponce de Leon, Dario; Castelli, Graciela; García, Erika G; Kwok, Kenneth; Rojo, Ricardo

    2017-06-01

    Rheumatoid arthritis (RA) is a chronic, autoimmune disease characterized by joint destruction. Tofacitinib is an oral Janus kinase inhibitor for the treatment of RA. This post hoc analysis assessed the safety of tofacitinib in Latin American (LA) patients with RA versus the Rest of World (RoW) population. Data were pooled from 14 clinical studies of tofacitinib: six Phase 2, six Phase 3 and two long-term extension studies. Incidence rates (IRs; patients with events/100 patient-years of treatment exposure) were calculated for safety events of special interest combined across tofacitinib doses. 95% confidence intervals (CI) for IRs were calculated using the maximum likelihood method. Descriptive comparisons were made between LA and RoW (excluding LA) populations. This analysis included data from 984 LA patients and 4687 RoW patients. IRs for safety events of special interest were generally similar between LA and RoW populations, with overlapping 95% CIs. IRs for discontinuation due to adverse events, serious infections, tuberculosis, all herpes zoster (HZ), serious HZ, malignancies (excluding non-melanoma skin cancer) and major adverse cardiovascular events were numerically lower for LA versus RoW patients; IR for mortality was numerically higher. No lymphoma was reported in the LA population versus eight cases in the RoW population. Exposure (extent and length) was lower in the LA population (2148.33 patient-years [mean = 2.18 years]) versus RoW (10515.68 patient-years [mean = 2.24 years]). This analysis of pooled data from clinical studies of tofacitinib in patients with RA demonstrates that tofacitinib has a consistent safety profile across LA and RoW patient populations.

  8. A new species of Cyrtodactylus (Squamata: Gekkonidae) and the first record of C. otai from Son La Province, Vietnam.

    PubMed

    Nguyen, Truong Quang; Pham, Anh VAN; Ziegler, Thomas; Ngo, Hanh Thi; LE, Minh Duc

    2017-10-30

    We describe a new species of Cyrtodactylus on the basis of four specimens collected from the limestone karst forest of Phu Yen District, Son La Province, Vietnam. Cyrtodactylus sonlaensis sp. nov. is distinguished from the remaining Indochinese bent-toed geckos by a combination of the following characters: maximum SVL of 83.2 mm; dorsal tubercles in 13-15 irregular rows; ventral scales in 34-42 rows; ventrolateral folds prominent without interspersed tubercles; enlarged femoral scales 15-17 on each thigh; femoral pores 14-15 on each thigh in males, absent in females; precloacal pores 8, in a continuous row in males, absent in females; postcloacal tubercles 2 or 3; lamellae under toe IV 18-21; dorsal head with dark brown markings, in oval and arched shapes; nuchal loop discontinuous; dorsum with five brown bands between limb insertions, third and fourth bands discontinuous; subcaudal scales distinctly enlarged. In phylogenetic analyses, the new species is nested in a clade consisting of C. huongsonensis and C. soni from northern Vietnam and C. cf. pulchellus from Malaysia based on maximum likelihood and Bayesian analyses. In addition, we record Cyrtodactylus otai Nguyen, Le, Pham, Ngo, Hoang, Pham & Ziegler for the first time from Son La Province based on specimens collected from Van Ho District.

  9. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823

  10. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  11. Description and Phylogeny of Urostyla grandis wiackowskii subsp. nov. (Ciliophora, Hypotricha) from an Estuarine Mangrove in Brazil.

    PubMed

    Paiva, Thiago da Silva; Shao, Chen; Fernandes, Noemi Mendes; Borges, Bárbara do Nascimento; da Silva-Neto, Inácio Domingos

    2016-01-01

    Interphase specimens, aspects of physiological reorganization, and divisional morphogenesis were investigated in a strain of a hypotrichous ciliate highly similar to Urostyla grandis Ehrenberg (type species of Urostyla), collected from a mangrove area in the estuary of the Paraíba do Sul river (Rio de Janeiro, Brazil). The results revealed that although interphase specimens fall within the known morphologic variability of U. grandis, the morphogenetic processes show conspicuous differences. The parental adoral zone is entirely renewed during morphogenesis, and the marginal cirri exhibit a unique combination of developmental modes, in which left marginal rows originate from multiple anlagen arising from the innermost left marginal cirral row, whereas the right marginal ciliature originates from individual within-row anlagen. Based on these characteristics, a new subspecies, U. grandis wiackowskii subsp. nov., is proposed, and consequently U. grandis grandis Ehrenberg, stat. nov. is established. Bayesian and maximum-likelihood analyses of the 18S rDNA unambiguously placed U. grandis wiackowskii as the adelphotaxon of a cluster formed by other U. grandis sequences. The implications of these findings for the systematics of Urostyla are discussed.

  12. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred, or has not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
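
    For intuition, a minimal EM sketch for fitting a single zero-inflated Poisson series (illustrative Python; the paper's model and EM run over the full drug-event table, so this one-series simplification is mine):

        import numpy as np

        def zip_em(counts, n_iter=200):
            # EM for a zero-inflated Poisson:
            #   P(0) = pi + (1 - pi) * exp(-lam)
            #   P(k) = (1 - pi) * exp(-lam) * lam**k / k!   for k >= 1
            counts = np.asarray(counts, dtype=float)
            pi, lam = 0.5, max(counts.mean(), 1e-6)    # crude starting values
            for _ in range(n_iter):
                # E-step: posterior probability that an observed zero is a true zero
                z = np.where(counts == 0.0,
                             pi / (pi + (1.0 - pi) * np.exp(-lam)), 0.0)
                # M-step: update the mixing weight and the Poisson mean
                pi = z.mean()
                lam = counts.sum() / max((1.0 - z).sum(), 1e-12)
            return pi, lam

    A likelihood ratio test then compares such maximized likelihoods under the null and alternative hypotheses.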

  13. Master teachers' responses to twenty literacy and science/mathematics practices in deaf education.

    PubMed

    Easterbrooks, Susan R; Stephenson, Brenda; Mertens, Donna

    2006-01-01

    Under a grant to improve outcomes for students who are deaf or hard of hearing awarded to the Association of College Educators--Deaf/Hard of Hearing, a team identified content that all teachers of students who are deaf and hard of hearing must understand and be able to teach. Also identified were 20 practices associated with content standards (10 each, literacy and science/mathematics). Thirty-seven master teachers identified by grant agents rated the practices on a Likert-type scale indicating the maximum benefit of each practice and maximum likelihood that they would use the practice, yielding a likelihood-impact analysis. The teachers showed strong agreement on the benefits and likelihood of use of the rated practices. Concerns about implementation of many of the practices related to time constraints and mixed-ability classrooms were themes of the reviews. Actions for teacher preparation programs were recommended.

  14. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
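
    The count-based intensity compensation described above amounts to a single scaling; a minimal sketch (function and variable names are mine):

        def compensate_intensity(image, used_counts, total_counts):
            # Scale a reconstruction computed from a truncated list-mode stream
            # so its intensity matches what the full stream would have produced.
            return image * (float(total_counts) / float(used_counts))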

  15. Don’t Rock the Boat: How Antiphase Crew Coordination Affects Rowing

    PubMed Central

    de Brouwer, Anouk J.; de Poel, Harjo J.; Hofmijster, Mathijs J.

    2013-01-01

    It is generally accepted that crew rowing requires perfect synchronization between the movements of the rowers. However, a long-standing and somewhat counterintuitive idea is that out-of-phase crew rowing might have benefits over in-phase (i.e., synchronous) rowing. In synchronous rowing, 5 to 6% of the power produced by the rower(s) is lost to velocity fluctuations of the shell within each rowing cycle. Theoretically, a possible way for crews to increase average boat velocity is to reduce these fluctuations by rowing in antiphase coordination, a strategy in which rowers perfectly alternate their movements. On the other hand, the framework of coordination dynamics explicates that antiphase coordination is less stable than in-phase coordination, which may impede performance gains. Therefore, we compared antiphase to in-phase crew rowing performance in an ergometer experiment. Nine pairs of rowers performed a two-minute maximum effort in-phase and antiphase trial at 36 strokes min−1 on two coupled free-floating ergometers that allowed for power losses to velocity fluctuations. Rower and ergometer kinetics and kinematics were measured during the trials. All nine pairs easily acquired antiphase rowing during the warm-up, while one pair’s coordination briefly switched to in-phase during the maximum effort trial. Although antiphase interpersonal coordination was indeed less accurate and more variable, power production was not negatively affected. Importantly, in antiphase rowing the decreased power loss to velocity fluctuations resulted in more useful power being transferred to the ergometer flywheels. These results imply that antiphase rowing may indeed improve performance, even without any experience with antiphase technique. Furthermore, it demonstrates that although perfectly synchronous coordination may be the most stable, it is not necessarily equated with the most efficient or optimal performance. PMID:23383024

  16. Don't rock the boat: how antiphase crew coordination affects rowing.

    PubMed

    de Brouwer, Anouk J; de Poel, Harjo J; Hofmijster, Mathijs J

    2013-01-01

    It is generally accepted that crew rowing requires perfect synchronization between the movements of the rowers. However, a long-standing and somewhat counterintuitive idea is that out-of-phase crew rowing might have benefits over in-phase (i.e., synchronous) rowing. In synchronous rowing, 5 to 6% of the power produced by the rower(s) is lost to velocity fluctuations of the shell within each rowing cycle. Theoretically, a possible way for crews to increase average boat velocity is to reduce these fluctuations by rowing in antiphase coordination, a strategy in which rowers perfectly alternate their movements. On the other hand, the framework of coordination dynamics explicates that antiphase coordination is less stable than in-phase coordination, which may impede performance gains. Therefore, we compared antiphase to in-phase crew rowing performance in an ergometer experiment. Nine pairs of rowers performed a two-minute maximum effort in-phase and antiphase trial at 36 strokes min(-1) on two coupled free-floating ergometers that allowed for power losses to velocity fluctuations. Rower and ergometer kinetics and kinematics were measured during the trials. All nine pairs easily acquired antiphase rowing during the warm-up, while one pair's coordination briefly switched to in-phase during the maximum effort trial. Although antiphase interpersonal coordination was indeed less accurate and more variable, power production was not negatively affected. Importantly, in antiphase rowing the decreased power loss to velocity fluctuations resulted in more useful power being transferred to the ergometer flywheels. These results imply that antiphase rowing may indeed improve performance, even without any experience with antiphase technique. Furthermore, it demonstrates that although perfectly synchronous coordination may be the most stable, it is not necessarily equated with the most efficient or optimal performance.

  17. Computationally derived rules for persistence of C60 nanowires on recumbent pentacene bilayers.

    PubMed

    Cantrell, Rebecca A; James, Christine; Clancy, Paulette

    2011-08-16

    The tendency for C(60) nanowires to persist on two monolayers of recumbent pentacene is studied using molecular dynamics (MD) simulations. A review of existing experimental literature for the tilt angle adopted by pentacene on noble metal surfaces shows that studies cover a limited range from 55° to 90°, motivating simulation studies of essentially the entire range of tilt angles (10°-90°) to predict the optimum surface tilt angle for C(60) nanowire formation. The persistence of a 1D nanowire depends sensitively on this tilt angle, the amount of initial tensile strain, and the presence of surface step edges. At room temperature, C(60) nanowires oriented along the pentacene short axes persist for several nanoseconds and are more likely to occur if they reside between, or within, pentacene rows for ϕ ≤ ∼60°. The likelihood of this persistence increases the smaller the tilt angle. Nanowires oriented along the long axes of pentacene molecules are unlikely to form. The limit of stability of nanowires was tested by raising the temperature to 400 K. Nanowires located between pentacene rows survived this temperature rise, but those located initially within pentacene rows are only stable in the range ϕ(1) = 30°-50°. Flatter pentacene surfaces, that is, tilt angles above about 60°, are subject to disorder caused by C(60) molecules "burrowing" into the pentacene surface. An initial strain of 5% applied to the C(60) nanowires significantly decreases the likelihood of nanowire persistence. In contrast, any appreciable surface roughness, even by half a monolayer in height of a third pentacene monolayer, strongly enhances the likelihood of nanowire formation due to the strong binding energy of C(60) molecules to step edges.

  18. Rotation sensitivity analysis of a two-dimensional array of coupled resonators

    NASA Astrophysics Data System (ADS)

    Zamani Aghaie, Kiarash; Vigneron, Pierre-Baptiste; Digonnet, Michel J. F.

    2015-03-01

    In this paper, we study the rotation sensitivity of a gyroscope made of a two-dimensional array of coupled resonators consisting of N columns of one-dimensional coupled resonant optical waveguides (CROWs) connected by two bus waveguides, each CROW consisting of M identical ring resonators. We show that the maximum rotation sensitivity of this structure is a strong function of the parity of the number of rows M. For an odd number of rows, and when the number of columns is small, the maximum sensitivity is high, slightly lower than the maximum sensitivity of a single-ring resonator with two input/output waveguides (the case M = N = 1), which is a resonant waveguide optical gyroscope (RWOG). For an even M and small N, the maximum sensitivity is much lower than that of the RWOG. Increasing the number of columns N increases the sensitivity of an even-row 2D CROW sublinearly, as N^0.39, up to 30 columns. In comparison, the maximum sensitivity of an RWOG of equal area increases faster, as √N. The sensitivity of the 2D CROW therefore always lags behind that of the RWOG. For a 2×2 CROW, if the spacing L between the columns is increased sufficiently, the maximum sensitivity increases linearly with L due to the presence of a composite Mach-Zehnder interferometer in the structure. However, for equal footprints this sensitivity is also not larger than that of a single-ring resonator. Regardless of the number of rows and columns and the spacing, for the same footprint and propagation loss, a 2D CROW gyroscope is not more sensitive than an RWOG.

  19. Single-row modified mason-allen versus double-row arthroscopic rotator cuff repair: a biomechanical and surface area comparison.

    PubMed

    Nelson, Cory O; Sileo, Michael J; Grossman, Mark G; Serra-Hsu, Frederick

    2008-08-01

    The purpose of this study was to compare the time-zero biomechanical strength and the surface area of repair between a single-row modified Mason-Allen rotator cuff repair and a double-row arthroscopic repair. Six matched pairs of sheep infraspinatus tendons were repaired by both techniques. Pressure-sensitive film was used to measure the surface area of repair for each configuration. Specimens were biomechanically tested with cyclic loading from 20 N to 30 N for 20 cycles and were loaded to failure at a rate of 1 mm/s. Failure was defined at 5 mm of gap formation. Double-row suture anchor fixation restored a mean surface area of 258.23 +/- 69.7 mm(2) versus 148.08 +/- 75.5 mm(2) for single-row fixation, a 74% increase (P = .025). Both repairs had statistically similar time-zero biomechanics. There was no statistical difference in peak-to-peak displacement or elongation during cyclic loading. Single-row fixation showed a higher mean load to failure (110.26 +/- 26.4 N) than double-row fixation (108.93 +/- 21.8 N). This was not statistically significant (P = .932). All specimens failed at the suture-tendon interface. Double-row suture anchor fixation restores a greater percentage of the anatomic footprint when compared with a single-row Mason-Allen technique. The time-zero biomechanical strength was not significantly different between the 2 study groups. This study suggests that the 2 factors are independent of each other. Surface area and biomechanical strength of fixation are 2 independent factors in the outcome of rotator cuff repair. Maximizing both factors may increase the likelihood of complete tendon-bone healing and ultimately improve clinical outcomes. For smaller tears, a single-row modified Mason-Allen suture technique may provide sufficient strength, but for large amenable tears, a double row can provide both strength and increased surface area for healing.

  20. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  1. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
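
    In standard interlaboratory notation (the paper's own parametrization may differ), the random effects model sketched above can be written as

        x_{ij} = mu + b_i + e_{ij},   b_i ~ N(0, sigma^2),   e_{ij} ~ N(0, sigma_i^2),   i = 1, ..., m,   j = 1, ..., n_i,

    where mu is the common mean, sigma^2 is the between-method variance, and sigma_i^2 is the error variance of method i, which is allowed to differ across methods.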

  2. Spatial distribution pattern of termite in Endau Rompin Plantation

    NASA Astrophysics Data System (ADS)

    Jalaludin, Nur-Atiqah; Rahim, Faszly

    2015-09-01

    We censused 18 field blocks covering approximately 190 ha, with a total of 28,604 palms sampled in a grid of 2×4 palms, from July 2011 to March 2013. The field blocks comprise rows of palm trees, harvesting paths, field drains and stacking rows, with a maximum of 30 palms per row, planted about 9 m apart, alternately, in a maximum of 80 rows. SADIE analysis, generating the index of aggregation Ia, the local clustering value Vi, and the local gap value Vj, was adopted to estimate the spatial pattern. The patterns were then presented in contour maps using the Surfer 12 software. The patterns produced were associated with factors such as habitat disturbance, habitat fragmentation, and resources affecting nesting and foraging activities. Results show that field blocks with great habitat disturbance recorded the highest numbers of dead palms and termite hits. Blocks located far from the main access road recorded fewer than 2% of palms with termite hits. This research may provide ecological data on termite spatial patterns in the oil palm ecosystem.

  3. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) ...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  4. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.

  5. Evaluation of the fuselage lap joint fatigue and terminating action repair

    NASA Technical Reports Server (NTRS)

    Samavedam, Gopal; Thomson, Douglas; Jeong, David Y.

    1994-01-01

    Terminating action is a remedial repair which entails the replacement of shear head countersunk rivets with universal head rivets which have a larger shank diameter. The procedure was developed to eliminate the risk of widespread fatigue damage (WFD) in the upper rivet row of a fuselage lap joint. A test and evaluation program has been conducted by Foster-Miller, Inc. (FMI) to evaluate the terminating action repair of the upper rivet row of a commercial aircraft fuselage lap splice. Two full-scale fatigue tests were conducted on fuselage panels to study the growth of fatigue cracks in the lap joint. The second test was performed to evaluate the effectiveness of the terminating action repair. In both tests, cyclic pressurization loading was applied to the panels while crack propagation was recorded at all rivet locations at regular intervals to generate detailed data on conditions of fatigue crack initiation, ligament link-up, and fuselage fracture. This program demonstrated that the terminating action repair substantially increases the fatigue life of a fuselage panel structure and effectively eliminates the occurrence of cracking in the upper rivet row of the lap joint. While high cycle crack growth was recorded in the middle rivet row during the second test, failure was not imminent when the test was terminated after cycling to well beyond the service life. The program also demonstrated that the initiation, propagation, and linkup of WFD in full-scale fuselage structures can be simulated and quantitatively studied in the laboratory. This paper presents an overview of the testing program and provides a detailed discussion of the data analysis and results. Crack distribution and propagation rates and directions as well as frequency of cracking are presented for both tests. The progression of damage to linkup of adjacent cracks and to eventual overall panel failure is discussed. In addition, an assessment of the effectiveness of the terminating action repair and the occurrence of cracking in the middle rivet row is provided, and conclusions of practical interest are drawn.

  6. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate were considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  7. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation is an unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber price and exchange rate for all selected countries.
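
    As a concrete reference for the estimation step, a minimal EM sketch for a two-component normal mixture (illustrative Python; the paper's time-series specification is richer than this):

        import numpy as np

        def em_two_normal(x, n_iter=100):
            # EM for a two-component Gaussian mixture (minimal sketch).
            x = np.asarray(x, dtype=float)
            w = np.array([0.5, 0.5])              # mixing weights
            mu = np.percentile(x, [25, 75])       # crude initial means
            sd = np.array([x.std(), x.std()])     # initial spreads
            for _ in range(n_iter):
                # E-step: responsibility of each component for each point
                dens = (np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
                        / (sd * np.sqrt(2 * np.pi)))
                r = w * dens
                r /= r.sum(axis=1, keepdims=True)
                # M-step: re-estimate weights, means, standard deviations
                n_k = r.sum(axis=0)
                w = n_k / len(x)
                mu = (r * x[:, None]).sum(axis=0) / n_k
                sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
            return w, mu, sd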

  9. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  10. A biomechanical analysis of point of failure during lateral-row tensioning in transosseous-equivalent rotator cuff repair.

    PubMed

    Dierckman, Brian D; Goldstein, Jordan L; Hammond, Kyle E; Karas, Spero G

    2012-01-01

    The purpose of this study was to determine the maximum load and point of failure of the construct during tensioning of the lateral row of a transosseous-equivalent (TOE) rotator cuff repair. In 6 fresh-frozen human shoulders, a TOE rotator cuff repair was performed, with 1 suture from each medial anchor passed through the tendon and tied in a horizontal mattress pattern. One of 2 limbs from each of 2 medial anchors was pulled laterally over the tendon. After preparation of the lateral bone for anchor placement, the 2 limbs were passed through the polyether ether ketone (PEEK) eyelet of a knotless anchor and tied to a tensiometer. The lateral anchor was placed into the prepared bone tunnel but not fully seated. Tensioning of the lateral-row repair was simulated by pulling the tensiometer to tighten the suture limbs as they passed through the eyelet of the knotless anchor. The mode of failure and maximum tension were recorded. The procedure was then repeated for the second lateral-row anchor. The mean load to failure during lateral-row placement in the TOE model was 80.8 ± 21.0 N (median, 83 N; range, 27.2 to 115.8 N). There was no statistically significant difference between load to failure during lateral-row tensioning for the anterior and posterior anchors (P = .84). Each of the 12 constructs failed at the eyelet of the lateral anchor. Retrieval analysis showed no failure of the medial anchors, no medial suture cutout through the rotator cuff tendon, and no signs of gapping at the repair site. Our results suggest that the medial-row repair does not appear vulnerable during tensioning of the lateral row of a TOE rotator cuff repair with the implants tested. However, surgeons should exercise caution when tensioning the lateral row, especially when lateral-row anchors with PEEK eyelets are implemented. For this repair construct, the findings suggest that although the medial row is not vulnerable during lateral-row tensioning of a TOE rotator cuff repair, lateral-row anchors with PEEK eyelets appear vulnerable to early failure.

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  12. Re: Penetration Behavior of Opposed Rows of Staggered Secondary Air Jets Depending on Jet Penetration Coefficient and Momentum Flux Ratio

    NASA Technical Reports Server (NTRS)

    Holdeman, James D.

    2016-01-01

    The purpose of this article is to explain why the extension of the previously published C = (S/Ho)sqrt(J) scaling for opposed rows of staggered jets wasn't directly successful in the study by Choi et al. (2016). It is not surprising that staggered jets from opposite sides do not pass each other at the expected C value, because Ho/D and sqrt(J) are much larger than the maximum in previous studies. These, and large x/D's, tend to suggest development of 2-dimensional flow. Although there are distinct optima for opposed rows of in-line jets, single-side injection, and opposed rows of staggered jets based on C, opposed rows of staggered jets provide as good or better mixing performance, at any C value, than opposed rows of in-line jets or jets from single-side injection.

  13. Numerical study of aero-excitation of steam-turbine rotor blade self-oscillations

    NASA Astrophysics Data System (ADS)

    Galaev, S. A.; Makhnov, V. Yu.; Ris, V. V.; Smirnov, E. M.

    2018-05-01

    The blade aero-excitation increment is evaluated by numerical solution of the full 3D unsteady Reynolds-averaged Navier-Stokes equations governing wet steam flow in a powerful steam-turbine last stage. The equilibrium wet steam model was adopted. Blade surface oscillations are defined by the eigen-modes of a row of blades bounded by a shroud. A grid dependency study was performed with a reduced model, i.e. a set of blades corresponding to a multiple of an eigen-mode nodal diameter. All other computations were carried out for the entire blade row. Two cases are considered, a row of original blades and a row of modified (reinforced) blades. The influence of the eigen-mode nodal diameter and of blade reinforcement on the aero-excitation increment is analyzed. It has been established, in particular, that the maximum aero-excitation increment for the reinforced-blade row is half that of the original-blade row. Overall, the results point to a lower probability of blade self-oscillations for the reinforced blade row.

  14. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  15. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
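
    A sketch of a stabilized log-likelihood of this kind (illustrative Python, not the authors' code; the marginal Fisher density form below is the standard one from the inclination-only literature and should be checked against the paper, and the data are made up). The exponentially scaled Bessel function i0e is used so the large exponential factors cancel, as the abstract describes:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import i0e

        def neg_loglik(params, inc):
            # Marginal Fisher log-density of inclinations (radians), with the
            # exponential factors cancelled analytically: the term
            # kappa*(cos(I - theta) - 1) is always <= 0, and
            # i0e(u) = exp(-u) * I0(u) stays bounded for u >= 0.
            theta, kappa = params
            u = kappa * np.cos(theta) * np.cos(inc)
            ll = (np.log(kappa) - np.log1p(-np.exp(-2.0 * kappa))
                  + np.log(np.cos(inc))
                  + kappa * (np.cos(inc - theta) - 1.0)
                  + np.log(i0e(u)))
            return -ll.sum()

        # usage sketch: maximize over (theta, kappa) from a crude starting point
        inc = np.radians([62.0, 68.0, 71.0, 75.0, 80.0])     # made-up data
        res = minimize(neg_loglik, x0=np.array([inc.mean(), 10.0]), args=(inc,),
                       bounds=[(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3), (1e-3, 1e4)])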

  16. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  17. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  18. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings exist in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was attempted using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, in which the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, a general-purpose graphics processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. Ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, the small region of interest, and the general-purpose graphics processing unit, achieved fast artefact correction.
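
    For reference, the maximum likelihood-expectation maximization update named above has a compact multiplicative form; a minimal dense-matrix sketch (illustrative Python; clinical-scale CT uses operator-based projectors, and the ordered-subset variant cycles the same update over subsets of the projection data):

        import numpy as np

        def mlem(A, y, n_iter=50):
            # Basic ML-EM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)).
            # A: (n_rays, n_pixels) system matrix; y: measured projection counts.
            x = np.ones(A.shape[1])                   # positive initial image
            sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)  # measured / forward-projected
                x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x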

  19. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermeston, Mark W.

    Franklin County noxious weed management along BPA rights-of-ways, transmission structures, roads, and switches listed in Attachment 1. Attachment 1 identifies the ROW, ROW width, and ROW length of the proposed action. Includes all BPA 115kV, 230kV, and 500 kV ROWs in Franklin County, Washington. BPA proposes to clear noxious and/or unwanted low-growing vegetation in all BPA ROWs in Franklin County, Washington. In a cooperative effort, BPA, through landowners and the Franklin County Weed Control Board, plan to eradicate noxious plants and other unwanted, low-growing vegetation within the ROW width including all structures and access roads. BPA’s overall goal is to eradicate all noxious and unwanted vegetation through chemical treatment and reseeding. Selective and nonselective chemical treatment using spot, local and broadcast methods. All work will be executed in accordance with the National Electrical Safety Code and BPA standards. Work is to begin in March 2002.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermeston, Mark W.

    Benton County noxious weed management along BPA rights-of-ways, transmission structures, roads, and switches listed in Attachment 1. Attachment 1 identifies the ROW, ROW width, and ROW length of the proposed action. Includes all BPA 115kV, 230kV, 345kV and 500 kV ROWs in Benton County, Washington. BPA proposes to clear noxious and/or unwanted low-growing vegetation in all BPA ROWs in Benton County, Washington. In a cooperative effort, BPA, through landowners and the Benton County Weed Control Board, plan to eradicate noxious plants and other unwanted, low-growing vegetation within the ROW width including all structures and access roads. BPA’s overall goal is to eradicate all noxious and unwanted vegetation through chemical treatment and reseeding. Selective and nonselective chemical treatment using spot, local and broadcast methods. All work will be executed in accordance with the National Electrical Safety Code and BPA standards. Work is to begin in March 2002.

  2. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  3. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
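
    The two-step structure described above can be sketched generically: an inner maximization for each fixed antimode, followed by a scalar search that exploits quasi-concavity of the profile. In the sketch below the inner step is replaced by a toy surrogate (the real algorithm uses support reduction); scipy's bounded scalar minimizer plays the role of the bisection search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_loglik(a, data):
    """Stand-in for the partially maximised likelihood at antimode a.
    In the actual algorithm this value comes from the support reduction
    step; here a quadratic peaked at the sample mean is used so the
    outer search can be demonstrated."""
    return -np.sum((data - a) ** 2)

data = np.array([0.3, 0.9, 1.2, 1.8, 2.4, 3.1])

# Outer step: quasi-concavity of the profile means a bounded scalar
# search (bisection/golden section) finds the global maximum.
res = minimize_scalar(lambda a: -profile_loglik(a, data),
                      bounds=(data.min(), data.max()), method="bounded")
print("estimated antimode:", res.x)
```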

  4. Predicting one repetition maximum equations accuracy in paralympic rowers with motor disabilities.

    PubMed

    Schwingel, Paulo A; Porto, Yuri C; Dias, Marcelo C M; Moreira, Mônica M; Zoppi, Cláudio C

    2009-05-01

    Resistance training intensity is prescribed as a percentage of maximum strength, defined as the maximum tension generated by a muscle or muscle group. This value is found through the one repetition maximum (1RM) test. The 1RM test demands time and is not appropriate for some populations because of the risk it poses. In recent years, prediction equations for maximal strength have been used to avoid the inconveniences of the 1RM test. The purpose of this study was to verify the accuracy of 12 1RM prediction equations for disabled rowers. Nine male paralympic rowers (7 one-leg amputee rowers and 2 rowers with cerebral palsy; age, 30 +/- 7.9 years; height, 175.1 +/- 5.9 cm; weight, 69 +/- 13.6 kg) performed the 1RM test for the lying T-bar row and flat barbell bench press exercises to determine upper-body strength, and the leg press exercise to determine lower-body strength. The 1RM test was performed and, based on submaximal repetition loads, several linear and exponential equation models were tested with regard to their accuracy. We did not find statistical differences between measured and predicted 1RM values for the lying T-bar row and bench press exercises (p = 0.84 and 0.23, respectively); however, the leg press exercise showed a highly significant difference between measured and predicted values (p < 0.01). In conclusion, rowers with motor disabilities tolerate 1RM testing procedures, and 1RM prediction equations are accurate for the bench press and lying T-bar row, but not for the leg press, in these athletes.
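
    For context, two widely used 1RM prediction equations (Epley and Brzycki) have simple closed forms; whether these were among the 12 equations tested is not stated in the abstract, so the sketch below is purely illustrative.

```python
def epley_1rm(weight, reps):
    """Epley equation: 1RM = w * (1 + reps/30)."""
    return weight * (1.0 + reps / 30.0)

def brzycki_1rm(weight, reps):
    """Brzycki equation: 1RM = w * 36 / (37 - reps)."""
    return weight * 36.0 / (37.0 - reps)

# Example: 80 kg lifted for 5 submaximal repetitions.
print(round(epley_1rm(80, 5), 1))    # ~93.3 kg
print(round(brzycki_1rm(80, 5), 1))  # ~90.0 kg
```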

  5. A maximum likelihood map of chromosome 1.

    PubMed Central

    Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S

    1979-01-01

    Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128

  6. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods, viewed from the number of test items of an aptitude test. The variance reflects the accuracy achieved by the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  7. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
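
    A data-aided simplification of this idea is easy to sketch: when the transmitted symbols are known, the joint ML estimates of signal amplitude and noise power are sample averages (the paper's estimator handles the harder unknown-data case). Names and noise levels below are illustrative.

```python
import numpy as np

def ml_snr_estimate(y, d):
    """Joint ML estimate of signal amplitude and noise power for
    y[k] = A*d[k] + n[k], with known symbols d[k] in {-1, +1} and
    white Gaussian noise; the SNR estimate is A^2 / sigma^2."""
    A_hat = np.mean(d * y)                       # ML signal amplitude
    sigma2_hat = np.mean((y - A_hat * d) ** 2)   # ML noise power
    return A_hat ** 2 / sigma2_hat

rng = np.random.default_rng(0)
d = rng.choice([-1.0, 1.0], size=10_000)
y = 1.0 * d + rng.normal(scale=0.5, size=d.size)
print(ml_snr_estimate(y, d))   # close to the true SNR = 1 / 0.25 = 4
```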

  8. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  9. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
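
    A minimal sketch of fitting a two-component normal mixture by maximum likelihood, using scikit-learn's EM-based GaussianMixture on synthetic data that stands in for the price series (the study's data are not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for the economic data: two latent regimes.
x = np.concatenate([rng.normal(-1.0, 0.5, 300),
                    rng.normal(2.0, 1.0, 700)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("average log-likelihood:", gm.score(x))
```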

  10. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  11. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
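
    The invariance claim can be checked directly: a nonsingular affine map shifts every class's Gaussian log-likelihood by the same constant, so the ML decision is unchanged. A short derivation (notation mine):

```latex
Let $y = Ax + b$ with $A$ nonsingular. If class $k$ has density
$x \sim \mathcal{N}(\mu_k, \Sigma_k)$, then
$y \sim \mathcal{N}(A\mu_k + b,\, A\Sigma_k A^{\top})$ and
\[
\log p_Y(y \mid k) \;=\; \log p_X(x \mid k) \;-\; \log\lvert\det A\rvert .
\]
The term $\log\lvert\det A\rvert$ is the same for every class, so
\[
\arg\max_k \, \log p_Y(y \mid k) \;=\; \arg\max_k \, \log p_X(x \mid k),
\]
i.e.\ maximum likelihood class assignments are unaffected by the transformation.
```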

  12. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  13. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.

    PubMed

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-26

    The zero-potential scanning circuit is widely used as a read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array; therefore, it severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.

  14. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
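
    A minimal sketch of the per-point Gaussian maximum likelihood classifier (approach 1), with class means and covariances estimated from training pixels; the two-band toy data are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian_ml(X_train, y_train):
    """Estimate per-class mean and covariance from training pixels."""
    classes = np.unique(y_train)
    return {c: (X_train[y_train == c].mean(axis=0),
                np.cov(X_train[y_train == c], rowvar=False))
            for c in classes}

def classify(params, X):
    """Assign each pixel to the class with the highest log-likelihood."""
    classes = sorted(params)
    ll = np.column_stack([
        multivariate_normal.logpdf(X, mean=m, cov=S)
        for m, S in (params[c] for c in classes)])
    return np.array(classes)[np.argmax(ll, axis=1)]

rng = np.random.default_rng(2)
X0 = rng.normal([0, 0], 1.0, (100, 2))   # e.g. two spectral bands, crop A
X1 = rng.normal([3, 3], 1.0, (100, 2))   # crop B
X = np.vstack([X0, X1]); y = np.repeat([0, 1], 100)
params = fit_gaussian_ml(X, y)
print((classify(params, X) == y).mean())  # training accuracy
```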

  15. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  16. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. Cramer-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference. C. V. Tran. The report derives Cramer-Rao bound, MUSIC, and maximum likelihood (ML) asymptotic variances corresponding to two-source direction-of-arrival estimation, where the sources were modeled as ... Listed figures include: MUSIC for two equipowered signals impinging on a 5-element ULA ((a) |p| = 0.50 and |p| = 1.00, SNR = 20 dB).

  17. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one the vector of true parameters is the unique maximal point of the maximum likelihood function in the region of parameter identifiability, and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  18. 75 FR 38056 - Airworthiness Directives; McDonnell Douglas Corporation Model MD-90-30 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... (Row A) of the support fittings of the left and right engine aft mount with new fasteners. The service... fasteners (Row A) of the support fittings of the left and right engine aft mounts with new fasteners, in... fittings of the left and right engines, and corrective actions if necessary. This proposed AD would instead...

  19. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  20. The physical demands of Super 14 rugby union.

    PubMed

    Austin, Damien; Gabbett, Tim; Jenkins, David

    2011-05-01

    The purpose of the present study was to describe the match-play demands of professional rugby union players competing in Super 14 matches during the 2008 and 2009 seasons. The movements of 20 players from a Super 14 rugby union team during the 2008 and 2009 seasons were video recorded. Using time-motion analysis (TMA), five players from each of four positional groups (front-row forwards, back-row forwards, inside backs and outside backs) were assessed. Players covered between 4218 m and 6389 m during the games. The maximum distances covered in a game by the four groups were: front row forwards (5139 m), back row forwards (5422 m), inside backs (6389 m) and outside backs (5489 m). The back row forwards spent the greatest amount of time in high-intensity exercise (1190 s), followed by the front row forwards (1015 s), the inside backs (876 s) and the outside backs (570 s). Average distances covered in individual sprint efforts were: front row forwards (16 m), back row forwards (14 m), inside backs (17 m) and outside backs (18 m). Work-to-rest ratios of 1:4, 1:4, 1:5, and 1:6 were found for the front row forwards, back row forwards, inside backs and outside backs, respectively. The Super 14 competitions of 2008 and 2009 showed an increase in total high-intensity activities, sprint frequency, and work-to-rest ratios across all playing positions. For players and teams to remain competitive in Super 14 rugby, training (including recovery practices) should reflect these current demands. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
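
    The procedure described here is easy to sketch for a two-component univariate normal mixture: M(theta) below is the standard EM update (the step-size-1 special case), and the generalized iteration moves a fraction omega of the way along the EM direction with omega between 0 and 2. The clipping safeguards are added for robustness and are not part of the paper.

```python
import numpy as np

def em_update(theta, x):
    """One standard EM step M(theta) for a two-component normal mixture;
    theta = (weight, mean1, mean2, var1, var2)."""
    w, m1, m2, v1, v2 = theta
    p1 = w * np.exp(-(x - m1) ** 2 / (2 * v1)) / np.sqrt(v1)
    p2 = (1 - w) * np.exp(-(x - m2) ** 2 / (2 * v2)) / np.sqrt(v2)
    r = p1 / (p1 + p2)                    # responsibilities of component 1
    m1n = np.sum(r * x) / r.sum()
    m2n = np.sum((1 - r) * x) / (1 - r).sum()
    v1n = np.sum(r * (x - m1n) ** 2) / r.sum()
    v2n = np.sum((1 - r) * (x - m2n) ** 2) / (1 - r).sum()
    return np.array([r.mean(), m1n, m2n, v1n, v2n])

def relaxed_em(x, theta0, omega=1.5, iters=300):
    """theta <- theta + omega * (M(theta) - theta); omega = 1 is plain EM."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        theta = theta + omega * (em_update(theta, x) - theta)
        theta[0] = np.clip(theta[0], 1e-3, 1 - 1e-3)  # keep weight in (0, 1)
        theta[3:] = np.maximum(theta[3:], 1e-6)       # keep variances positive
    return theta

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(4, 1, 600)])
print(relaxed_em(x, [0.5, -1.0, 5.0, 1.0, 1.0]))   # ~ (0.4, 0, 4, 1, 1)
```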

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  3. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…


  5. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  6. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  7. Pixel-by-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame. These vectors form a shift-vector field. The projections and polar parameters of the vectors are studied as the estimated parameters. Two methods for estimating the shift-vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. Two criteria for forming this estimate are studied: the minimum of the gradient estimate and the maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and estimation of a moving object's trajectory using the shift-vector field.
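
    A one-dimensional sketch of the recurrent stochastic-gradient idea may help: the shift between two signals is refined sample by sample, each update using only the local intensity difference and gradient. The paper's two-dimensional, bidirectional row-wise schemes reduce to this kind of recursion along a row; all parameter values below are illustrative.

```python
import numpy as np

def sgd_shift_estimate(f, g, mu=2.0, sweeps=20):
    """Recurrent SGD estimate of a constant shift s with g(x) ~ f(x - s)."""
    x = np.arange(f.size, dtype=float)
    s = 0.0
    for _ in range(sweeps):
        for i in range(1, f.size - 1):           # sequential, pixel by pixel
            fi = np.interp(i - s, x, f)          # current model value
            dfi = (np.interp(i - s + 0.5, x, f)  # numerical gradient wrt x
                   - np.interp(i - s - 0.5, x, f))
            # d/ds of f(i - s) is -f'(i - s); one stochastic gradient step:
            s -= mu * (fi - g[i]) * (-dfi)
    return s

x = np.arange(200, dtype=float)
f = np.exp(-(x - 100) ** 2 / 50.0)               # reference frame (one row)
true_s = 3.7
g = np.exp(-(x - 100 - true_s) ** 2 / 50.0)      # shifted frame
print(sgd_shift_estimate(f, g))                  # converges toward 3.7
```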

  8. Temporal resolution measurement of 128-slice dual source and 320-row area detector computed tomography scanners in helical acquisition mode using the impulse method.

    PubMed

    Hara, Takanori; Urikura, Atsushi; Ichikawa, Katsuhiro; Hoshino, Takashi; Nishimaru, Eiji; Niwa, Shinji

    2016-04-01

    To analyse the temporal resolution (TR) of modern computed tomography (CT) scanners using the impulse method, and to assess the actual maximum TR of the respective helical acquisition modes. To assess the actual TR of the helical acquisition modes of a 128-slice dual-source CT (DSCT) scanner and a 320-row area-detector CT (ADCT) scanner, we measured the TRs of various combinations of pitch factor (P) and gantry rotation time (R). The TR of the helical acquisition modes of the 128-slice DSCT scanner improved continuously with shorter gantry rotation time and greater pitch factor. For the 320-row ADCT scanner, however, the TR with a pitch factor of <1.0 was almost equal to the gantry rotation time, whereas with a pitch factor of >1.0 it was approximately one half of the gantry rotation time. The maximum TR values of the single- and dual-source helical acquisition modes of the 128-slice DSCT scanner were 0.138 s (R/P = 0.285/1.5) and 0.074 s (R/P = 0.285/3.2), and the maximum TR values of the 64 × 0.5- and 160 × 0.5-mm detector configurations of the helical acquisition modes of the 320-row ADCT scanner were 0.120 s (R/P = 0.275/1.375) and 0.195 s (R/P = 0.3/0.6), respectively. Because the TR of a CT scanner is not accurately depicted in the specifications of the individual scanner, appropriate acquisition conditions should be determined based on actual TR measurements. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values using an R program, and the results are displayed in tables to facilitate comparison.
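
    The ML side of this comparison has a closed form, which makes the bias/MSE experiment easy to reproduce; the sketch below covers only the ML estimator (the paper's specific Bayes estimators are not reproduced).

```python
import numpy as np

def rayleigh_ml_scale(x):
    """Closed-form MLE of the Rayleigh scale: sigma^2 = sum(x^2) / (2n)."""
    return np.sqrt(np.sum(x ** 2) / (2 * x.size))

rng = np.random.default_rng(4)
sigma_true, n, reps = 2.0, 30, 10_000
est = np.array([rayleigh_ml_scale(rng.rayleigh(sigma_true, n))
                for _ in range(reps)])
print("bias:", est.mean() - sigma_true)   # small bias at n = 30
print("MSE: ", np.mean((est - sigma_true) ** 2))
```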

  10. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation, with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed loop bandwidth.

  11. Experimental wave attenuation study over flexible plants on a submerged slope

    NASA Astrophysics Data System (ADS)

    Yin, Zegao; Yang, Xiaoyu; Xu, Yuanzhao; Ding, Meiling; Lu, Haixiang

    2017-12-01

    Using plants is an environmentally friendly form of coastal protection that attenuates wave energy. In this paper, a set of experiments was conducted to investigate wave attenuation by flexible grasses on a submerged slope, and the wave attenuation coefficient was calculated for different still water depths, slopes, and grass configurations. The slope was found to play a significant role in wave attenuation. The wave attenuation coefficient increases with increasing relative row number and relative density. For a small relative row number, the two configurations, from the slope top to its toe and from the slope toe to its top, performed nearly equally. For a medium relative row number, the configuration from the slope toe to its top performed more poorly than that from the slope top to its toe; however, it performed better for a high relative row number. With a single row of grasses placed close to the slope top, the wave attenuation coefficient shows double peaks. With increasing grass rows or still water depth, the grass location corresponding to the maximum wave attenuation coefficient moves close to the slope top. Dimensional analysis and the least squares method were used to derive an empirical equation for the wave attenuation coefficient that accounts for the relative density, the slope, the relative row number, and the relative location of the middle row, and the equation was validated against the experimental data.

  12. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
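
    The computational bottleneck mentioned above comes from the recursive mutant-count distribution. The sketch below shows the conventional approach that MLE-BD accelerates, using the Ma-Sandri-Sarkar recursion for the Luria-Delbrück probabilities and a bounded scalar search over the expected number of mutations m; it is illustrative and is not the MLE-BD algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_probs(m, nmax):
    """Luria-Delbruck mutant-count probabilities p_0..p_nmax via the
    Ma-Sandri-Sarkar recursion; cost grows quadratically with nmax,
    which is what makes large mutant counts slow."""
    p = np.zeros(nmax + 1)
    p[0] = np.exp(-m)
    for n in range(1, nmax + 1):
        k = np.arange(n)
        p[n] = (m / n) * np.sum(p[k] / ((n - k) * (n - k + 1)))
    return p

def ld_mle(counts):
    nmax = int(max(counts))
    def nll(m):
        p = ld_probs(m, nmax)
        return -np.sum(np.log(p[np.asarray(counts)] + 1e-300))
    return minimize_scalar(nll, bounds=(1e-3, 50.0), method="bounded").x

counts = [0, 1, 0, 3, 2, 0, 7, 1, 0, 25, 4, 0, 2, 1, 9]
print("ML estimate of m:", ld_mle(counts))
```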

  13. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  14. Chemical composition and biological activity of essential oils of Origanum vulgare L. subsp. vulgare L. under different growth conditions.

    PubMed

    De Falco, Enrica; Mancini, Emilia; Roscigno, Graziana; Mignola, Enrico; Taglialatela-Scafati, Orazio; Senatore, Felice

    2013-12-04

    This research was aimed at investigating the essential oil production, chemical composition and biological activity of a crop of pink-flowered oregano (Origanum vulgare L. subsp. vulgare L.) under different spatial distributions of the plants (single and binate rows). This plant factor was shown to affect growth, soil covering, fresh biomass, and essential oil amount and composition. In particular, the essential oil percentage was higher for the binate-row treatment at full bloom. The chemical composition of the oils obtained by hydrodistillation was fully characterized by GC and GC-MS. The oil from plants grown in single rows was rich in sabinene, while plants grown in double rows were richer in ocimenes. The essential oils showed antimicrobial action, mainly against Gram-positive pathogens, particularly Bacillus cereus and B. subtilis.

  15. Locomotion on the water surface: hydrodynamic constraints on rowing velocity require a gait change

    PubMed

    Suter; Wildman

    1999-10-01

    Fishing spiders, Dolomedes triton (Araneae, Pisauridae), propel themselves across the water surface using two gaits: they row with four legs at sustained velocities below 0.2 m s(-1) and they gallop with six legs at sustained velocities above 0.3 m s(-1). Because, during rowing, most of the horizontal thrust is provided by the drag of the leg and its associated dimple as both move across the water surface, the integrity of the dimple is crucial. We used a balance, incorporating a biaxial clinometer as the transducer, to measure the horizontal thrust forces on a leg segment subjected to water moving past it in non-turbulent flow. Changes in the horizontal forces reflected changes in the status of the dimple and showed that a stable dimple could exist only under conditions that combined low flow velocity, shallow leg-segment depth and a long perimeter of the interface between the leg segment and the water. Once the dimple disintegrated, leaving the leg segment submerged, less drag was generated. Therefore, the disintegration of the dimple imposes a limit on the efficacy of rowing with four legs. The limited degrees of freedom in the leg joints (the patellar joints move freely in the vertical plane but allow only limited flexion in other planes) impose a further constraint on rowing by restricting the maximum leg-tip velocity (to approximately 33 % of that attained by the same legs during galloping). This confines leg-tip velocities to a range at which maintenance of the dimple is particularly important. The weight of the spider also imposes constraints on the efficacy of rowing: because the drag encountered by the leg-cum-dimple is proportional to the depth of the dimple and because dimple depth is proportional to the supported weight, only spiders with a mass exceeding 0.48 g can have access to the full range of hydrodynamically possible dimple depths during rowing. Finally, the maximum velocity attainable during rowing is constrained by the substantial drag experienced by the spider during the glide interval between power strokes, drag that is negligible for a galloping spider because, for most of each inter-stroke interval, the spider is airborne. We conclude that both hydrodynamic and anatomical constraints confine rowing spiders to sustained velocities lower than 0.3 m s(-1), and that galloping allows spiders to move considerably faster because galloping is free of these constraints.

  16. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of accumulate-repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.

  17. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  18. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  19. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  20. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using mutual information and distance measure criteria. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing the maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
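
    The first-order dependence-tree construction is closely related to the Chow-Liu procedure: estimate pairwise mutual information and keep a maximum-weight spanning tree. A sketch on toy binary features (the maximum-weight tree is obtained as a minimum spanning tree of negated weights):

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)
# Toy binary feature matrix: feature 1 depends on 0, feature 2 on 1.
f0 = rng.integers(0, 2, 1000)
f1 = (f0 ^ (rng.random(1000) < 0.1)).astype(int)
f2 = (f1 ^ (rng.random(1000) < 0.2)).astype(int)
X = np.column_stack([f0, f1, f2, rng.integers(0, 2, 1000)])

p = X.shape[1]
mi = np.zeros((p, p))
for i in range(p):
    for j in range(i + 1, p):
        mi[i, j] = mutual_info_score(X[:, i], X[:, j])

# Maximum-weight spanning tree = minimum spanning tree of negated MI.
tree = minimum_spanning_tree(-mi).tocoo()
print(list(zip(tree.row, tree.col)))   # edges of the dependence tree
```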

  1. Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E

    NASA Technical Reports Server (NTRS)

    Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie

    2001-01-01

    In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test, that is, its detection and false alarm probabilities. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time, so one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) maximum likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
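
    The maximum likelihood matching step can be sketched directly: score each dictionary row by the log-likelihood of the observed, possibly incomplete, test results given per-test detection and false-alarm probabilities, and pick the best row. The dictionary and probabilities below are toy values, not from TEAMS-RT.

```python
import numpy as np

def ml_diagnose(obs, mask, D, pd, pf):
    """Pick the dictionary row (fault) maximizing the likelihood of the
    observed test results. obs: 0/1 results; mask: which tests ran;
    D: dictionary (rows = faults, cols = expected test outcomes);
    pd/pf: per-test detection and false-alarm probabilities."""
    p_one = np.where(D == 1, pd, pf)        # P(result = 1 | fault row)
    ll = np.where(obs == 1, np.log(p_one), np.log(1 - p_one))
    return int(np.argmax((ll * mask).sum(axis=1)))

D = np.array([[1, 1, 0, 0],                 # fault 0 trips tests 0 and 1
              [0, 1, 1, 0],                 # fault 1 trips tests 1 and 2
              [0, 0, 1, 1]])                # fault 2 trips tests 2 and 3
pd = np.full(4, 0.9); pf = np.full(4, 0.05)
obs  = np.array([0, 1, 1, 0])               # observed results
mask = np.array([1, 1, 1, 0])               # test 3 unavailable (incomplete)
print(ml_diagnose(obs, mask, D, pd, pf))    # -> 1
```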

  2. Morphological and molecular data for a new species of Pomphorhynchus Monticelli, 1905 (Acanthocephala: Pomphorhynchidae) in the Mexican redhorse Moxostoma austrinum Bean (Cypriniformes: Catostomidae) in central Mexico.

    PubMed

    García-Varela, Martín; Mendoza-Garfias, Berenit; Choudhury, Anindo; Pérez-Ponce de León, Gerardo

    2017-11-01

    Pomphorhynchus purhepechus n. sp. is described from the intestine of the Mexican redhorse Moxostoma austrinum Bean (Catostomidae) in central Mexico. The new species can be distinguished from the other seven described species of Pomphorhynchus Monticelli, 1905 in the Americas by a subspherical proboscis and 14 longitudinal rows with 16-18 hooks each; the third and the fourth row of hooks are alternately longest. Sequences of the mitochondrial cytochrome c oxidase subunit 1 (cox1) gene and the large subunit (LSU) rDNA (including the domains D2-D3) were used to corroborate the morphological distinction between the new species and Pomphorhynchus bulbocolli Linkins in Van Cleave, 1919, a species widely distributed in several freshwater fish species across Canada, USA, and Mexico. The genetic divergence estimated between the new species and the isolates of P. bulbocolli ranged between 13 and 14% for cox1, and between 0.6 and 0.8% for LSU. Maximum likelihood and Bayesian inference analyses of each dataset showed that the isolates of P. bulbocolli parasitising freshwater fishes from three families, the Catostomidae, Cyprinidae and Centrarchidae, represent a separate lineage, and that the acanthocephalans collected from two localities in central Mexico comprise an independent lineage. In addition, our analysis of the genetic variation of P. bulbocolli demonstrates that individuals of this acanthocephalan from different host species are conspecific. Finally, the distribution, host-association, and phylogenetic relationship of the new species, when placed in the context of the region's geological history, suggest that both host and parasite underwent speciation after their ancestors became isolated in Central Mexico.

  3. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  4. Load to Failure and Stiffness

    PubMed Central

    Esquivel, Amanda O.; Duncan, Douglas D.; Dobrasevic, Nikola; Marsh, Stephanie M.; Lemos, Stephen E.

    2015-01-01

    Background: Rotator cuff tendinopathy is a frequent cause of shoulder pain that can lead to decreased strength and range of motion. Failures after using the single-row technique of rotator cuff repair have led to the development of the double-row technique, which is said to allow for more anatomical restoration of the footprint. Purpose: To compare 5 different types of suture patterns while maintaining equality in number of anchors. The hypothesis was that the Mason-Allen–crossed cruciform transosseous-equivalent technique is superior to other suture configurations while maintaining equality in suture limbs and anchors. Study Design: Controlled laboratory study. Methods: A total of 25 fresh-frozen cadaveric shoulders were randomized into 5 suture configuration groups: single-row repair with simple stitch technique; single-row repair with modified Mason-Allen technique; double-row Mason-Allen technique; double-row cross-bridge technique; and double-row suture bridge technique. Load and displacement were recorded at 100 Hz until failure. Stiffness and bone mineral density were also measured. Results: There was no significant difference in peak load at failure, stiffness, maximum displacement at failure, or mean bone mineral density among the 5 suture configuration groups (P > .05). Conclusion: According to study results, when choosing a repair technique, other factors such as the number of sutures in the repair should be considered when judging the strength of the repair. Clinical Relevance: Previous in vitro studies have shown the double-row rotator cuff repair to be superior to the single-row repair; however, clinical research does not necessarily support this. This study found no difference when comparing 5 different repair methods, supporting research that suggests the number of sutures and not the pattern can affect biomechanical properties. PMID:26665053

  5. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  6. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  7. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ... continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical" ... the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  8. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
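
    The unconditional fit that such a program performs can be sketched compactly with Newton-Raphson on the logistic log-likelihood; the conditional (matched-set) likelihood needs a different objective and is omitted. This is an illustration in NumPy, not the original Pascal code.

```python
import numpy as np

def logistic_mle(X, y, iters=25):
    """Unconditional ML for logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    cov = np.linalg.inv(X.T @ (X * W[:, None]))   # inverse information
    return beta, np.sqrt(np.diag(cov))            # estimates and SEs

rng = np.random.default_rng(6)
x = rng.normal(size=500)
y = (rng.random(500) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(float)
X = np.column_stack([np.ones(500), x])
beta, se = logistic_mle(X, y)
print("coefficients:", beta, "SE:", se)    # near (0.5, 1.2)
print("odds ratio for x:", np.exp(beta[1]))
```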

  9. Row distance method sowing of forage Kochia, eastern saltwort and winterfat.

    PubMed

    Zadbar, M; Dormanov, D N; Shariph-abad, H Heidari; Dorikov, M; Jalilvand, H

    2007-05-15

    In this study, we used three native range species: eastern saltwort, winterfat, and forage kochia. These species are extremely well adapted to dry lands and have high productivity compared with other forage species. In order to increase range production on poor, dry, and semi-dry land in the province of Khorasan (Sabzevar), the seeds of these species were sown naturally. They were sown individually in rows and in mixtures of two or three species in alternating rows. The research was carried out as a factorial experiment in a completely randomized block design (CRBD) with two factors. The first factor was the row spacing of seeding (three levels: 50, 75, and 100 cm between rows) and the second was the kind of intercropping method (seven levels: individual seeding of the three mentioned species, and mixed alternating rows of two or three species together), with four replicates (3 x 7 x 4). Seed establishment was assessed by counting the bushes that germinated or died in each experimental unit. The results showed that, for all treatments, seed germination was most abundant from late April to late May. Sowing with 50 cm row spacing gave significantly higher production than sowing with 75 and 100 cm spacing. Comparison of the relative frequency percentages of germinated seeds and of germinated seeds that died also revealed that individual sowing of Salsola orientalis and Eurotia ceratoides at 50 cm row spacing gave the best results in the Sabzevar region, because of the lowest plant mortality and the highest biomass productivity.

  10. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method, we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag

  11. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
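
    Both estimation routes can be sketched for a simple Gaussian non-homogeneous regression using the closed-form CRPS of a normal distribution; the model (mean linear in one covariate, constant log-variance) and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def gaussian_crps(mu, sigma, y):
    """Closed-form CRPS of N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(7)
x = rng.normal(size=400)                      # e.g. an ensemble mean
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=400)

def unpack(p):                                # mean linear in x, sigma = e^c
    return p[0] + p[1] * x, np.exp(p[2])

nll  = lambda p: -np.sum(norm.logpdf(y, *unpack(p)))
crps = lambda p: np.sum(gaussian_crps(unpack(p)[0], unpack(p)[1], y))

p0 = np.zeros(3)
print("ML:  ", minimize(nll,  p0).x)   # both close to (1, 2, log 1.5)
print("CRPS:", minimize(crps, p0).x)
```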

  12. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
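
    A compact sketch of the approach follows. The cluster log-likelihood is quoted from our reading of the paper and should be checked against the original before serious use; the greedy pairwise merging is written for clarity, not speed, and assumes positive internal correlation sums.

      import numpy as np

      def cluster_loglik(C, labels):
          # Likelihood of a cluster structure given the correlation matrix C;
          # only cluster sizes n and internal correlation sums c enter.
          L = 0.0
          for s in np.unique(labels):
              idx = np.flatnonzero(labels == s)
              n = idx.size
              if n < 2:
                  continue                      # singletons contribute zero
              c = C[np.ix_(idx, idx)].sum()     # includes the diagonal, c >= n
              L += 0.5 * (np.log(n / c)
                          + (n - 1) * np.log((n * n - n) / (n * n - c)))
          return L

      def greedy_merge(C):
          # Start from singletons and merge the pair of clusters that most
          # increases the likelihood, until no merge improves it.
          labels = np.arange(C.shape[0])
          improved = True
          while improved:
              improved = False
              best, merge = cluster_loglik(C, labels), None
              for a in np.unique(labels):
                  for b in np.unique(labels):
                      if a < b:
                          trial = np.where(labels == b, a, labels)
                          ll = cluster_loglik(C, trial)
                          if ll > best:
                              best, merge, improved = ll, (a, b), True
              if merge is not None:
                  labels = np.where(labels == merge[1], merge[0], labels)
          return labels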

  13. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  14. PAMLX: a graphical user interface for PAML.

    PubMed

    Xu, Bo; Yang, Ziheng

    2013-12-01

    This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.

  15. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  16. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  17. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  18. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.

  19. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.

  20. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  1. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  2. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  3. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  4. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  5. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  6. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  7. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  8. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  9. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  10. Experimental strategies in carrying out VCU for tobacco crop I: plot design and size.

    PubMed

    Toledo, F H R B; Ramalho, M A P; Pulcinelli, C E; Bruzi, A T

    2013-09-19

    We aimed to establish standards for tobacco Valor de Cultivo e Uso (VCU) trials in Brazil. We obtained information regarding the size and design of plots for two varietal groups of tobacco (Virginia and Burley). Ten inbred lines of each varietal group were evaluated in a randomized complete block design with four replications. The plot contained 42 plants, arranged in six rows of seven columns each. For each plant in the plot, with the plant's position (row and column) as a reference, cured leaf weight (g/plant), total sugar content (%), and total alkaloid content (%) were determined. The point of maximum curvature of the coefficient-of-variation curve was estimated. Trials with the number of plants per plot ranging from 2 to 41 were simulated. The use of a border was not justified because the inbred line x plot position interactions were never significant, showing that the behavior of the inbred lines was consistent across positions. Plant performance varied according to the column position in the plot. To lessen the effect of this factor, the use of plots with more than one row is recommended. Experimental precision, evaluated by the CV%, increased with plot size; nevertheless, the maximum curvature method showed no appreciable increase in precision once the number of plants exceeded seven. The plot size required to identify the best inbred line coincided with that indicated by the maximum curvature method.
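
    The maximum-curvature step can be reproduced with a short calculation: fit CV = a*x^(-b) on the log-log scale and locate the point of maximum curvature of the fitted curve numerically. The (plot size, CV%) pairs below are hypothetical placeholders, not the study's measurements.

      import numpy as np

      x = np.array([2, 4, 8, 12, 16, 24, 32, 41], dtype=float)     # plants/plot
      cv = np.array([24.0, 17.5, 13.0, 11.0, 9.8, 8.5, 7.6, 7.0])  # CV%

      # fit CV = a * x^(-b) by least squares on the log-log scale
      slope, log_a = np.polyfit(np.log(x), np.log(cv), 1)
      a, b = np.exp(log_a), -slope

      # curvature of y = a x^(-b):  kappa(x) = |y''| / (1 + y'^2)^(3/2)
      xs = np.linspace(x.min(), x.max(), 10_000)
      y1 = -a * b * xs ** (-b - 1)              # first derivative
      y2 = a * b * (b + 1) * xs ** (-b - 2)     # second derivative
      kappa = np.abs(y2) / (1 + y1 ** 2) ** 1.5

      print("plot size at maximum curvature:", xs[np.argmax(kappa)], "plants")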

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
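
    The criterion that the A* search optimizes can be made concrete with a brute-force sketch for a small code: for BPSK signaling over an AWGN channel, the maximum-likelihood codeword is the one with the largest correlation against the received sequence. The exhaustive version below is an illustration only; the paper's A* search prunes this same codeword space, which is what makes length-64 codes feasible.

      import numpy as np
      from itertools import product

      # generator matrix of the (7,4) Hamming code in systematic form [I | P]
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]], dtype=int)

      def ml_decode(r):
          # ML soft-decision decoding by exhaustive search: maximize the
          # correlation <r, 1 - 2c> over all codewords c, which equals the
          # log-likelihood up to constants for BPSK on an AWGN channel.
          best, best_c = -np.inf, None
          for msg in product([0, 1], repeat=G.shape[0]):
              c = np.asarray(msg) @ G % 2
              metric = np.dot(r, 1 - 2 * c)
              if metric > best:
                  best, best_c = metric, c
          return best_c

      rng = np.random.default_rng(0)
      r = 1.0 + 0.8 * rng.normal(size=7)   # all-zero codeword sent as +1 symbols
      print(ml_decode(r))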

  13. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    (Table fragment) … (percent); maximum limits of: wild oats (percent), foreign material (percent), skinned and broken kernels… Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered…

  14. An evaluation of percentile and maximum likelihood estimators of weibull paremeters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had superior (smaller) bias and...

  15. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  16. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  17. Unclassified Publications of Lincoln Laboratory, 1 January - 31 December 1990. Volume 16

    DTIC Science & Technology

    1990-12-31

    (Index fragment from the publications and subject indexes) Entries include "Hopped Communication Systems with Nonuniform Hopping Distributions" (ADA223419, Apr. 1990); "Bistatic Radar Cross Section of a …" (Fenn, A.J., 2 May 1990); subject terms include LUNAR PERTURBATION, MAXIMUM LIKELIHOOD ALGORITHM (JA-6241, JA-6467), LWIR SPECTRAL BAND, and MAXIMUM LIKELIHOOD ESTIMATOR (JA-6476, MS-8466).

  18. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  19. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  20. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  1. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  2. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  3. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Treesearch

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  4. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  5. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way in this paper. David Pollard et al. used empirical process techniques in 2006 to study the asymptotics of least-squares estimation (LSE) for fitting nonlinear regression functions. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
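
    As a concrete illustration of the NLSE/MLE connection discussed here, the sketch below implements a plain Gauss-Newton iteration for a nonlinear regression model; under i.i.d. Gaussian errors the resulting least-squares estimator coincides with the MLE. The exponential-decay example and all names are our own, not the paper's.

      import numpy as np

      def gauss_newton(f, jac, y, x, beta0, tol=1e-10, max_iter=50):
          # beta <- beta + (J'J)^{-1} J'r, with r the current residuals
          beta = np.asarray(beta0, dtype=float)
          for _ in range(max_iter):
              r = y - f(x, beta)
              J = jac(x, beta)
              step = np.linalg.solve(J.T @ J, J.T @ r)
              beta += step
              if np.linalg.norm(step) < tol:
                  break
          return beta

      # example: y = b0 * exp(-b1 * x) + Gaussian noise
      f = lambda x, b: b[0] * np.exp(-b[1] * x)
      jac = lambda x, b: np.column_stack([np.exp(-b[1] * x),
                                          -b[0] * x * np.exp(-b[1] * x)])
      rng = np.random.default_rng(42)
      x = np.linspace(0.0, 4.0, 60)
      y = f(x, [2.0, 1.3]) + 0.05 * rng.normal(size=60)
      print(gauss_newton(f, jac, y, x, beta0=[1.0, 1.0]))   # ~ [2.0, 1.3]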

  6. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated within and between the levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  7. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  8. Overcoming Barriers to Firewise Actions by Residents. Final Report to Joint Fire Science Program

    Treesearch

    James D. Absher; Jerry J. Vaske; Katie M. Lyon

    2013-01-01

    Encouraging the public to take action (e.g., creating defensible space) that can reduce the likelihood of wildfire damage and decrease the likelihood of injury is a common approach to increasing wildfire safety and damage mitigation. This study was designed to improve our understanding of both individual and community actions that homeowners currently do or might take...

  9. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry , Phys. Rev. EPLEEE81539-375510.1103/PhysRevE.64.026103 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the r computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  10. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  11. 12-mode OFDM transmission using reduced-complexity maximum likelihood detection.

    PubMed

    Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold

    2015-02-01

    We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss.

  12. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  13. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  14. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  15. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  16. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  17. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  18. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  19. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
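
    A toy scalar version of the general idea, a Kalman filter whose measurement-noise variance is re-estimated recursively from the innovation sequence, is sketched below. It relies on the moment relation E[nu^2] = P_pred + R rather than the authors' recursive maximum likelihood formulation, and every parameter value is illustrative.

      import numpy as np

      def adaptive_kf(z, q=1e-4, r0=1.0, alpha=0.02):
          # scalar random-walk state; R is adapted from the innovations
          x, p, r = z[0], 1.0, r0
          for zk in z[1:]:
              p_pred = p + q                    # predict
              nu = zk - x                       # innovation
              k = p_pred / (p_pred + r)         # Kalman gain
              x = x + k * nu                    # state update
              p = (1 - k) * p_pred              # covariance update
              # E[nu^2] = p_pred + R drives the noise adaptation
              r = max(1e-8, (1 - alpha) * r + alpha * (nu ** 2 - p_pred))
          return x, r

      rng = np.random.default_rng(7)
      truth = np.cumsum(0.01 * rng.normal(size=2000))
      z = truth + 0.5 * rng.normal(size=2000)
      x_hat, r_hat = adaptive_kf(z)
      print("adapted R:", r_hat, "(true R = 0.25)")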

  20. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ˜21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.

  1. [Protective role of autotypic contacts under cerebellar neural net injury by toxic doses of NO-generative compounds].

    PubMed

    Samosudova, N V; Reutov, V P; Larionova, N P; Chaĭlakhian, L M

    2005-01-01

    In the present work, cerebellar neural net injury was induced by toxic doses of an NO-generating compound (NaNO2). A protective role of glial cells was revealed under such conditions. The present results were compared with those of a previous work concerning the action of high concentrations of glutamate on the frog cerebellum (Samosudova et al., 1996). In both cases we observed the appearance of spiral-like structures, "wrappers", involving several rows of transformed glial processes of smaller width, with bridges connecting the inner sides of the rows (autotypic contacts). A statistical analysis was performed on both the previous and the present data. We calculated the number and width of rows, and the intervals between bridges, depending on experimental conditions. As the injury increased (stimulation in the presence of NO), the number of rows in the "wrappers" also increased, while the row width and the intervals between bridges decreased. The presence of autotypic contacts in glial "wrappers" leads us to suppose the involvement of adhesive proteins (cadherins) in their formation. The data obtained suggest that the formation of the spiral "wrapper" structures may be regarded as a compensatory-adaptive reaction to injury of the cerebellar neural net by glutamate and NO-generating compounds.

  2. Patterns of measles transmission among airplane travelers.

    PubMed

    Edelson, Paul J

    2012-09-01

    With advanced air handling systems on modern aircraft and the high level of measles immunity in many countries, measles infection in air travelers may be considered a low-risk event. However, introduction of measles into countries where transmission has been controlled or eliminated can have substantial consequences both for the use of public health resources and for those still susceptible. In an effort to balance the relatively low likelihood of disease transmission among largely immune travelers and the risk to the public health of the occurrence of secondary cases resulting from importations, criteria in the United States for contact investigations for measles exposures consider contacts to be those passengers who are seated within 2 rows of the index case. However, recent work has shown that cabin air flow may not be as reliable a barrier to the spread of measles virus as previously believed. Along with these new studies, several reports have described measles developing after travel in passengers seated some distance from the index case. To understand better the potential for measles virus to spread on an airplane, reports of apparent secondary cases occurring in co-travelers of passengers with infectious cases of measles were reviewed. Medline™ was searched for articles in all languages from 1946 to week 1 of March 2012, using the search terms "measles [human] or rubeola" and ("aircraft" or "airplane" or "aeroplane" or "aviation" or "travel" or "traveler" or "traveller"); 45 citations were returned. Embase™ was searched from 1988 to week 11 2012, using the same search strategy; 95 citations were returned. Papers were included in this review if they reported secondary cases of measles occurring in persons traveling on an airplane on which a person or persons with measles also flew, and which included the seating location of both the index case(s) and the secondary case(s) on the plane. Nine reports, including 13 index cases and 23 apparent secondary cases on 10 flights, were identified in which transmission on board the aircraft appeared likely and which included seating information for both the index (primary) and secondary cases. Separation between index and secondary cases ranged from adjacent seats to 17 rows, with a median of 6 rows. Three flights had more than one index case aboard. Based on previously published data, it is not possible to say how unusual cases of measles transmission among air travelers beyond the usual zone of contact investigation (the row the index case sat in and 2 rows ahead of or behind that row) may be. The fact that several flights had more than one infectious case aboard and that all but two index cases were in the prodromal phase may be of importance in understanding the wider spread described in several of the reviewed reports. Although the pattern of cabin air flow typical of modern commercial aircraft has been considered highly effective in limiting the airborne spread of microorganisms, concerns have been raised about relying on the operation of these systems to determine exposure risk, as turbulence in the cabin air stream is generated when passengers and crew are aboard, allowing the transmission of infectious agents over many rows. Additionally, the characteristics of some index cases may reflect a greater likelihood of disease transmission. Investigators should continue to examine carefully both aircraft and index-case factors that may influence disease transmission and could serve as indicators on a case-by-case basis to include a broader group of travelers in a contact investigation. Published by Elsevier Ltd.

  3. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.

  4. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a sort of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to the category of decomposition techniques, and perform row and column generation in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.
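
    Since the 1-norm SVM is itself a linear program, the full (non-decomposed) problem can be written down directly. The sketch below does so with an off-the-shelf LP solver, splitting w into nonnegative parts to linearize the 1-norm; it illustrates the RCSC-LP structure only, whereas the paper's RCG methods solve a sequence of much smaller subproblems instead. The toy data are arbitrary.

      import numpy as np
      from scipy.optimize import linprog

      def l1_svm(X, y, C=1.0):
          # min ||w||_1 + C * sum(xi)  s.t.  y_i (w.x_i + b) >= 1 - xi_i, xi >= 0
          # with w = u - v and b = b+ - b-, all decision variables nonnegative
          n, d = X.shape
          c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
          Yx = y[:, None] * X
          A_ub = -np.hstack([Yx, -Yx, y[:, None], -y[:, None], np.eye(n)])
          b_ub = -np.ones(n)
          res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                        bounds=[(0, None)] * (2 * d + 2 + n), method="highs")
          w = res.x[:d] - res.x[d:2 * d]
          b = res.x[2 * d] - res.x[2 * d + 1]
          return w, b

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(+1, 1, (20, 2))])
      y = np.array([-1] * 20 + [+1] * 20)
      w, b = l1_svm(X, y)
      print((np.sign(X @ w + b) == y).mean())   # training accuracy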

  5. Crop Row Detection in Maize Fields Inspired on the Human Visual Perception

    PubMed Central

    Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.

    2012-01-01

    This paper proposes a new method, oriented to real-time image processing, for identifying crop rows of maize fields in images. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subject to gyrations, vibrations, and other undesired movements. The images are captured in perspective and are affected by these undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which provides the threshold to be applied during normal operation. The crop row detection applies a method, based on the image perspective projection, that searches for the maximum accumulation of segmented green pixels along straight alignments. These determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects. It compares favorably against the well-tested Hough transform for line detection. PMID:22623899
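
    The accumulation step lends itself to a very small sketch: each segmented green pixel votes, for every candidate slope, for the bottom-row column its alignment would pass through, and the best-supported (slope, column) pair wins. This simplified single-line version is our illustration, not the authors' implementation.

      import numpy as np

      def best_crop_line(mask, slopes=np.linspace(-0.3, 0.3, 31)):
          # mask: binary image, 1 = segmented green pixel; each pixel votes
          # for the bottom-row column it projects to under each slope
          h, w = mask.shape
          ys, xs = np.nonzero(mask)
          best = (0.0, 0, -1)                      # (slope, column, votes)
          for m in slopes:
              x0 = np.round(xs - m * (h - ys)).astype(int)
              ok = (x0 >= 0) & (x0 < w)
              votes = np.bincount(x0[ok], minlength=w)
              j = int(np.argmax(votes))
              if votes[j] > best[2]:
                  best = (m, j, int(votes[j]))
          return best[:2]

      mask = np.zeros((100, 60), dtype=int)
      for yy in range(100):
          mask[yy, 28 + int(0.1 * (100 - yy))] = 1   # synthetic slanted row
      print(best_crop_line(mask))                    # ~ (0.1, 28)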

  6. Performance Evaluation of a New Dedicated Breast PET Scanner Using NEMA NU4-2008 Standards.

    PubMed

    Miyake, Kanae K; Matsumoto, Keiichi; Inoue, Mika; Nakamoto, Yuji; Kanao, Shotaro; Oishi, Tae; Kawase, Shigeto; Kitamura, Keishi; Yamakawa, Yoshiyuki; Akazawa, Ayako; Kobayashi, Tetsuya; Ohi, Junichi; Togashi, Kaori

    2014-07-01

    The aim of this work was to evaluate the performance characteristics of a newly developed dedicated breast PET scanner, according to National Electrical Manufacturers Association (NEMA) NU 4-2008 standards. The dedicated breast PET scanner consists of 4 layers of a 32 × 32 lutetium oxyorthosilicate-based crystal array, a light guide, and a 64-channel position-sensitive photomultiplier tube. The size of a crystal element is 1.44 × 1.44 × 4.5 mm. The detector ring has a large solid angle with a 185-mm aperture and an axial coverage of 155.5 mm. The energy windows at depth of interaction for the first and second layers are 400-800 keV, and those at the third and fourth layers are 100-800 keV. A fixed timing window of 4.5 ns was used for all acquisitions. Spatial resolution, sensitivity, counting rate capabilities, and image quality were evaluated in accordance with NEMA NU 4-2008 standards. Human imaging was performed in addition to the evaluation. Radial, tangential, and axial spatial resolution measured as minimal full width at half maximum approached 1.6, 1.7, and 2.0 mm, respectively, for filtered backprojection reconstruction and 0.8, 0.8, and 0.8 mm, respectively, for dynamic row-action maximum-likelihood algorithm reconstruction. The peak absolute sensitivity of the system was 11.2%. Scatter fraction at the same acquisition settings was 30.1% for the rat-sized phantom. Peak noise-equivalent counting rate and peak true rate for the ratlike phantom was 374 kcps at 25 MBq and 603 kcps at 31 MBq, respectively. In the image-quality phantom study, recovery coefficients and uniformity were 0.04-0.82 and 1.9%, respectively, for standard reconstruction mode and 0.09-0.97 and 4.5%, respectively, for enhanced-resolution mode. Human imaging provided high-contrast images with restricted background noise for standard reconstruction mode and high-resolution images for enhanced-resolution mode. The dedicated breast PET scanner has excellent spatial resolution and high sensitivity. The performance of the dedicated breast PET scanner is considered to be reasonable enough to support its use in breast cancer imaging. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, L.C.

    The Prickett and Lonnquist two-dimensional groundwater model has been programmed for the Apple II microcomputer. Both leaky and nonleaky confined aquifers can be simulated. The model was adapted from the FORTRAN version of Prickett and Lonnquist. In the configuration presented here, the program requires 64 K of memory. Because of the large number of arrays used in the program, and the memory limitations of the Apple II, the maximum grid size that can be used is 20 rows by 20 columns. Input to the program is interactive, with prompting by the computer. Output consists of predicted head values at the row-column intersections (nodes).

  8. Butterfly Phonics: Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Merrell, Christine; Kasim, Adetayo

    2015-01-01

    Butterfly Phonics aims to improve the reading of struggling pupils through phonics instruction and a formal teaching style where pupils sit at desks in rows facing the teacher. It is based on a course book created by Irina Tyk, and was delivered in this evaluation by Real Action, a charity based in London. Real Action staff recruited and trained…

  9. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  10. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  11. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.

  12. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  13. The Construct Validity of Higher Order Structure-of-Intellect Abilities in a Battery of Tests Emphasizing the Product of Transformations: A Confirmatory Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1982-01-01

    A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)

  14. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed for different environments such as single decrement, double decrement, and multiple decrement. There exist two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the complete mortality data are not used in the moment estimation method. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation. An extension to double decrement is also introduced, and a simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
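
    A small sketch contrasting the two distributional assumptions: under the constant-force (exponential) model the MLE of the hazard is deaths divided by total central exposure, while a classical actuarial moment-style estimate approximates initial exposure as central exposure plus half the deaths. The simulated data and the exposure bookkeeping are deliberately simplified.

      import numpy as np

      rng = np.random.default_rng(11)
      n, mu_true = 1000, 0.05
      t_death = rng.exponential(1 / mu_true, size=n)   # latent death times
      t_censor = rng.uniform(0, 1, size=n)             # withdrawals within the year
      died = t_death <= t_censor
      exposure = np.where(died, t_death, t_censor)     # central exposure per life

      mu_hat = died.sum() / exposure.sum()             # constant-force MLE
      q_mle = 1.0 - np.exp(-mu_hat)
      q_udd = died.sum() / (exposure.sum() + 0.5 * died.sum())  # moment-style

      print(f"q (constant-force MLE) = {q_mle:.4f}")
      print(f"q (UDD moment estimate) = {q_udd:.4f}, "
            f"true q = {1 - np.exp(-mu_true):.4f}")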

  15. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  16. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator &Fcirc;. &Fcirc; may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  17. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
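
    As a rough sketch of what a weakly informative prior adds to plain maximum likelihood (the study's actual models and priors may differ), a MAP logistic regression with independent Normal(0, sd^2) priors on the coefficients reduces to an L2-penalized likelihood; the function name and prior scale below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def map_logistic(X, y, prior_sd=2.5):
    """MAP logistic regression with Normal(0, prior_sd^2) priors on the
    coefficients -- one simple reading of 'weakly informative', equivalent
    to an L2 (ridge) penalty on the log-likelihood."""
    def neg_log_post(beta):
        z = X @ beta
        loglik = np.sum(y * z - np.logaddexp(0.0, z))  # stable log(1+e^z)
        logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
        return -(loglik + logprior)
    return minimize(neg_log_post, np.zeros(X.shape[1]), method="BFGS").x
```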

  18. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
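
    For intuition about the unbinned method (the actual analysis models a real, nonideal instrument), here is a minimal sketch of fitting the ideal-polarimeter azimuthal distribution p(phi) = (1 + mu cos 2(phi - phi0)) / 2pi by unbinned maximum likelihood; names and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_modulation(phi):
    """Unbinned ML fit of p(phi) = (1 + mu*cos(2*(phi - phi0))) / (2*pi),
    the standard modulation curve for an ideal Compton polarimeter.
    phi: array of azimuthal scattering angles in radians."""
    def nll(params):
        mu, phi0 = params
        p = (1.0 + mu * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))
    res = minimize(nll, x0=[0.1, 0.0], bounds=[(0.0, 1.0), (-np.pi, np.pi)])
    return res.x  # (modulation amplitude, polarization angle)
```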

  19. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
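
    A hedged sketch of the general shape of such a statistic (the paper defines three specific variants; this is not their exact formula): per-order lod scores are combined on the likelihood-ratio scale, weighted by the posterior probability of each marker order.

```python
import numpy as np

def weighted_lod(lods_by_order, order_posteriors):
    """Combine lod scores computed under each plausible marker order into a
    single weighted statistic: convert to likelihood ratios, take the
    posterior-weighted sum, and return to the log10 scale."""
    lr = 10.0 ** np.asarray(lods_by_order, dtype=float)
    w = np.asarray(order_posteriors, dtype=float)
    return np.log10(np.sum(w * lr))

# e.g. two near-equally likely orders giving rather different lods
print(weighted_lod([3.1, 1.4], [0.55, 0.45]))
```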

  20. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  1. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  2. Does on-water resisted rowing increase or maintain lower-body strength?

    PubMed

    Lawton, Trent W; Cronin, John B; McGuigan, Michael R

    2013-07-01

    Over the past 30 years, endurance volumes have increased by >20% among the rowing elite; therefore, informed decisions about the value of weight training over other possible activities in periodized training plans for rowing need to be made. The purpose of this study was to quantify the changes in lower-body strength development after two 14-week phases of intensive resisted on-water rowing, either incorporating weight training or rowing alone. Ten elite women performed 2 resisted rowing sessions ("towing ropes," e.g., 8 × 3 minutes), 6 endurance sessions (e.g., 16-28 km at 70-80% maximum heart rate), and 2 rate-regulated races (e.g., 8,000 m at 24 strokes per minute) on-water each week. After a 4-week washout phase, the 14-week phase was repeated with the addition of 2 weight-training sessions (e.g., 3-4 sets × 6-15 reps). Percent (±SD) and standardized differences in effects (effect size [ES] ± 90% confidence limit) for 5-repetition leg pressing and isometric pulling strength were calculated from data ratio scaled for body mass, log transformed and adjusted for pretest scores. Resisted rowing alone did not increase leg pressing (-1.0 ± 5.3%, p = 0.51) or isometric pulling (5.3 ± 13.4%, p = 0.28) strength. In contrast, after weight training, a moderately greater increase in leg pressing strength was observed (ES = 0.72 ± 0.49, p = 0.03), although differences in isometric pulling strength were unclear (ES = 0.56 ± 1.69, p = 0.52). In conclusion, intensive on-water training including resisted rowing maintained but did not increase lower-body strength. Elite rowers or coaches might consider the incorporation of high-intensity nonfatiguing weight training concurrent to endurance exercise if increases in lower-body strength without changes in body mass are desired.

  3. Multimodal high-intensity interval training increases muscle function and metabolic performance in females.

    PubMed

    Buckley, Stephanie; Knapp, Kelly; Lackie, Amy; Lewry, Colin; Horvey, Karla; Benko, Chad; Trinh, Jason; Butcher, Scotty

    2015-11-01

    High-intensity interval training (HIIT) is a time-efficient method of improving aerobic and anaerobic power and capacity. In most individuals, however, HIIT using modalities such as cycling, running, and rowing does not typically result in increased muscle strength, power, or endurance. The purpose of this study is to compare the physiological outcomes of traditional HIIT using rowing (Row-HIIT) with a novel multimodal HIIT (MM-HIIT) circuit incorporating multiple modalities, including strength exercises, within an interval. Twenty-eight recreationally active women (age 24.7 ± 5.4 years) completed 6 weeks of either Row-HIIT or MM-HIIT and were tested on multiple fitness parameters. MM-HIIT and Row-HIIT resulted in similar improvements (p < 0.05 for post hoc pre- vs. post-training increases for each group) in maximal aerobic power (7% vs. 5%), anaerobic threshold (13% vs. 12%), respiratory compensation threshold (7% vs. 5%), anaerobic power (15% vs. 12%), and anaerobic capacity (18% vs. 14%). The MM-HIIT group had significant (p < 0.01 for all) increases in squat (39%), press (27%), and deadlift (18%) strength, broad jump distance (6%), and squat endurance (280%), whereas the Row-HIIT group had no increase in any muscle performance variable (p values 0.33-0.90). Post-training, 1-repetition maximum (1RM) squat (64.2 ± 13.6 vs. 45.8 ± 16.2 kg, p = 0.02), 1RM press (33.2 ± 3.8 vs. 26.0 ± 9.6 kg, p = 0.01), and squat endurance (23.9 ± 12.3 vs. 10.2 ± 5.6 reps, p < 0.01) were greater in the MM-HIIT group than in the Row-HIIT group. MM-HIIT resulted in similar aerobic and anaerobic adaptations but greater muscle performance increases than Row-HIIT in recreationally active women.

  4. Medial versus lateral supraspinatus tendon properties: implications for double-row rotator cuff repair.

    PubMed

    Wang, Vincent M; Wang, Fan Chia; McNickle, Allison G; Friel, Nicole A; Yanke, Adam B; Chubinskaya, Susan; Romeo, Anthony A; Verma, Nikhil N; Cole, Brian J

    2010-12-01

    Rotator cuff repair retear rates range from 25% to 90%, necessitating methods to improve repair strength. Although numerous laboratory studies have compared single-row with double-row fixation properties, little is known regarding regional (ie, medial vs lateral) suture retention properties in intact and torn tendons. A torn supraspinatus tendon will have reduced suture retention properties on the lateral aspect of the tendon compared with the more medial musculotendinous junction. Controlled laboratory study. Human supraspinatus tendons (torn and intact) were randomly assigned for suture retention mechanical testing, ultrastructural collagen fibril analysis, or histologic testing after suture pullout testing. For biomechanical evaluation, sutures were placed either at the musculotendinous junction (medial) or 10 mm from the free margin (lateral), and tendons were elongated to failure. Collagen fibril assessments were performed using transmission electron microscopy. Intact tendons showed no regional differences with respect to suture retention properties. In contrast, among torn tendons, the medial region exhibited significantly higher stiffness and work values relative to the lateral region. For the lateral region, work to 10-mm displacement (1592 ± 261 N-mm) and maximum load (265 ± 44 N) for intact tendons were significantly higher (P < .05) than that of torn tendons (1086 ± 388 N-mm and 177 ± 71 N, respectively). For medial suture placement, maximum load, stiffness, and work of intact and torn tendons were similar (P > .05). Regression analyses for the intact and torn groups revealed generally low correlations between donor age and the 3 biomechanical indices. For both intact and torn tendons, the mean fibril diameter and area density were greater in the medial region relative to the lateral (P ≤ .05). In the lateral tendon, but not the medial region, torn specimens showed a significantly lower fibril area fraction (48.3% ± 3.8%) than intact specimens (56.7% ± 3.6%, P < .05). Superior pullout resistance of medially placed sutures may provide a strain shielding effect for the lateral row after double-row repair. Larger diameter collagen fibrils as well as greater fibril area fraction in the medial supraspinatus tendon may provide greater resistance to suture migration. While clinical factors such as musculotendinous integrity warrant strong consideration for surgical decision making, the present ultrastructural and biomechanical results appear to provide a scientific rationale for double-row rotator cuff repair where sutures are placed more medially at the muscle-tendon junction.

  5. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  6. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
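
    For reference, a minimal sketch of the point-by-point Gaussian maximum-likelihood classification rule being compared (interface and names are illustrative): assign each pixel to the class whose fitted multivariate normal density is highest.

```python
import numpy as np

def gaussian_ml_classify(x, means, covs, priors=None):
    """Assign feature vector x to the class with the largest Gaussian
    log-density (plus log prior, if supplied): the ML discriminant rule."""
    scores = []
    for k, (m, S) in enumerate(zip(means, covs)):
        d = x - m
        _, logdet = np.linalg.slogdet(S)
        g = -0.5 * (logdet + d @ np.linalg.solve(S, d))
        if priors is not None:
            g += np.log(priors[k])
        scores.append(g)
    return int(np.argmax(scores))
```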

  7. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
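
    The NDMMF-specific score and Hessian expressions are given in the report; as a generic illustration of the computational core it describes, here is a bare Newton-Raphson iteration on the score equations U(theta) = 0 (a sketch, not the report's software).

```python
import numpy as np

def newton_raphson_mle(score, hessian, theta0, tol=1e-8, max_iter=50):
    """Repeated Newton-Raphson updates theta <- theta - H^{-1} U(theta),
    stopping when the step becomes negligibly small."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(theta), score(theta))
        theta -= step
        if np.max(np.abs(step)) < tol:
            break
    return theta
```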

  8. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  9. Does extensive on-water rowing increase muscular strength and endurance?

    PubMed

    Lawton, Trent W; Cronin, John B; McGuigan, Mike R

    2012-01-01

    The purpose of this study was to compare changes in aerobic condition, strength, and muscular endurance following 8 weeks of endurance rowing alone or in combination with weight-training. Twenty-two elite rowers were assigned to (1) rowing (n = 10, 250-270 km · week⁻¹) or (2) rowing (n = 12, 190-210 km · week⁻¹) plus four weight-training sessions each week. Pre and post mean and standardized effect-size (ES) differences in aerobic condition (watts at 4 mmol · L⁻¹) and strength (isometric pull, N), prone bench-pull (6-repetition maximum, 6-RM), 5- and 30-repetition leg-press and 60-repetition seated-arm-pull (J, performed on a dynamometer) normalized by body mass and log-transformed were analysed, after adjusting for gender. The standardized differences between groups were trivial for aerobic condition (ES [±90% CI] = 0.15; ±0.28, P = 0.37) and prone bench-pull (ES = 0.27; ±0.33, P = 0.18), although a moderate positive benefit in favour of rowing only was observed for the seated-arm-pull (ES = 0.42; ±0.4, P = 0.08). Only the weight-training group improved isometric pull (12.4 ± 8.9%, P < 0.01), 5-repetition (4.0 ± 5.7%, P < 0.01) and 30-repetition (2.4 ± 5.4%, P < 0.01) leg-press. In conclusion, while gains in aerobic condition and upper-body strength were comparable to extensive endurance rowing, weight-training led to moderately greater lower-body muscular-endurance and strength gains.

  10. Azinphos-methyl residues in apples and spatial distribution of fluorescein in vase-shaped apple trees.

    PubMed

    Bélanger, A; Bostanian, N J; Boivin, G; Boudreau, F

    1991-06-01

    Vase-shaped standard apple trees cv. McIntosh were sprayed with azinphos-methyl at pink, pink and 1st cover and 1st cover only. Residue analyses by gas chromatography revealed detectable residues on foliage until mid summer. At harvest, negligible residue levels were found on the peel and the whole apple. On four trees, fluorescein was sprayed in the same manner as the insecticide and maximum levels of the dye were detected on the outside lower canopy along the row. Minimal concentration of fluorescein was detected on the inner upper canopy away from the direction of the row.

  11. Turbine blade unsteady aerodynamic loading and heat transfer

    NASA Astrophysics Data System (ADS)

    Johnston, David Alan

    Stator indexing to minimize the unsteady aerodynamic loading of closely spaced airfoil rows in turbomachinery is a new technique for the passive control of flow-induced vibrations. This technique, along with the effects of steady blade loading, was studied by means of experiments performed in a two-stage low-speed research turbine. With the second vane row fixed, the inlet vane row was indexed to six positions over one vane-pitch cycle for a range of stage loadings. The aerodynamic forcing function to the first-stage rotor was measured in the rotating reference frame, with the resulting rotor blade unsteady aerodynamic response quantified by rotor blades instrumented with dynamic pressure transducers. Reductions in the unsteady lift magnitude were achieved at all turbine operating conditions, with attenuation ranging from 37% to 74% of the maximum unsteady lift. Additionally, in complementary experiments, the effects of stator indexing and steady blade loading on the unsteady heat transfer of the first- and second-stage rotors were studied for the design and highest blade loading conditions using platinum-film heat gages. The attenuation of the unsteady heat transfer coefficient was blade-loading dependent and location dependent along the chord and span, ranging from 10% to 90% of maximum. Due to the high degree of location dependence of attenuation, stator indexing is therefore best suited to minimize unsteady heat transfer in local hot spots of the blade rather than the blade as a whole.

  12. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  13. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    DTIC Science & Technology

    1982-04-01

    (Abstract illegible in source; the recoverable OCR fragments indicate the report examines the bias in maximum likelihood estimates of item parameters.)

  14. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  15. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  16. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.

  17. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  18. The Importance of Motivation in the Typewriting Classroom

    ERIC Educational Resources Information Center

    Jacks, Mary L.

    1976-01-01

    If a typing teacher makes maximum use of intrinsic rewards, it will not be necessary to use many extrinsic motivational devices. The implications of Maslow's "hierarchy of needs" for teachers of adolescents, and the basic motivational principles developed by Rowe are presented. (Author/AJ)

  19. Idle efficiency and pollution results for two-row swirl-can combustors having 72 modules

    NASA Technical Reports Server (NTRS)

    Biaglow, J. A.; Trout, A. M.

    1975-01-01

    Two 72-swirl-can-module combustors were investigated in a full annular combustor test facility at engine idle conditions typical of a 30:1 pressure-ratio engine. The effects of radial and circumferential fuel scheduling on combustion efficiency and gaseous pollutants levels were determined. Test conditions were inlet-air temperature, 452 K; inlet total pressure, 34.45 newtons per square centimeter; and reference velocity, 19.5 meters per second. A maximum combustion efficiency of 98.1 percent was achieved by radial scheduling of fuel to the inner row of swirl-can modules. Emission index values were 6.9 for unburned hydrocarbons and 50.6 for carbon monoxide at a fuel-air ratio of 0.0119. Circumferential fuel scheduling of two 90 degree sectors of the swirl-can arrays produced a maximum combustion efficiency of 97.3 percent. The emission index values were 12.0 for unburned hydrocarbons and 69.2 for carbon monoxide at a fuel-air ratio of 0.0130.

  20. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  1. Reliability and effect of sodium bicarbonate: buffering and 2000-m rowing performance.

    PubMed

    Carr, Amelia J; Slater, Gary J; Gore, Christopher J; Dawson, Brian; Burke, Louise M

    2012-06-01

    The aim of this study was to determine the effect and reliability of acute and chronic sodium bicarbonate ingestion for 2000-m rowing ergometer performance (watts) and blood bicarbonate concentration [HCO3-]. In a crossover study, 7 well-trained rowers performed paired 2000-m rowing ergometer trials under 3 double-blinded conditions: (1) 0.3 grams per kilogram of body mass (g/kg BM) acute bicarbonate; (2) 0.5 g/kg BM daily chronic bicarbonate for 3 d; and (3) calcium carbonate placebo, in semi-counterbalanced order. For 2000-m performance and [HCO3-], we examined differences in effects between conditions via pairwise comparisons, with differences interpreted in relation to the likelihood of exceeding smallest worthwhile change thresholds for each variable. We also calculated the within-subject variation (percent typical error). There were only trivial differences in 2000-m performance between placebo (277 ± 60 W), acute bicarbonate (280 ± 65 W) and chronic bicarbonate (282 ± 65 W); however, [HCO3-] was substantially greater after acute bicarbonate, than with chronic loading and placebo. Typical error for 2000-m mean power was 2.1% (90% confidence interval 1.4 to 4.0%) for acute bicarbonate, 3.6% (2.5 to 7.0%) for chronic bicarbonate, and 1.6% (1.1 to 3.0%) for placebo. Postsupplementation [HCO3-] typical error was 7.3% (5.0 to 14.5%) for acute bicarbonate, 2.9% (2.0 to 5.7%) for chronic bicarbonate and 6.0% (1.4 to 11.9%) for placebo. Performance in 2000-m rowing ergometer trials may not substantially improve after acute or chronic bicarbonate loading. However, performances will be reliable with both acute and chronic bicarbonate loading protocols.

  2. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356

  3. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286

  4. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study.
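
    A rough sketch of the double-penalized objective described above, assuming a logistic model: the log-likelihood plus Firth's penalty 0.5 log|X'WX| (with W = diag(p(1-p))) minus a ridge term 0.5·lambda·||beta||², maximized numerically. This follows the paper's general recipe but is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def double_penalized_logistic(X, y, ridge=0.1):
    """Maximize loglik + 0.5*log|X'WX| - 0.5*ridge*||beta||^2 for a
    logistic regression: Firth's penalty counters separation bias, and
    the ridge term stabilizes estimates under multicollinearity."""
    def neg_obj(beta):
        z = X @ beta
        p = 1.0 / (1.0 + np.exp(-z))
        loglik = np.sum(y * z - np.logaddexp(0.0, z))
        w = p * (1.0 - p)
        _, logdet = np.linalg.slogdet(X.T @ (X * w[:, None]))
        return -(loglik + 0.5 * logdet - 0.5 * ridge * beta @ beta)
    return minimize(neg_obj, np.zeros(X.shape[1]), method="BFGS").x
```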

  5. Prioritizing conservation investments for mammal species globally

    PubMed Central

    Wilson, Kerrie A.; Evans, Megan C.; Di Marco, Moreno; Green, David C.; Boitani, Luigi; Possingham, Hugh P.; Chiozza, Federica; Rondinini, Carlo

    2011-01-01

    We need to set priorities for conservation because we cannot do everything, everywhere, at the same time. We determined priority areas for investment in threat abatement actions, in both a cost-effective and spatially and temporally explicit way, for the threatened mammals of the world. Our analysis presents the first fine-resolution prioritization analysis for mammals at a global scale that accounts for the risk of habitat loss, the actions required to abate this risk, the costs of these actions and the likelihood of investment success. We evaluated the likelihood of success of investments using information on the past frequency and duration of legislative effectiveness at a country scale. The establishment of new protected areas was the action receiving the greatest investment, while restoration was never chosen. The resolution of the analysis and the incorporation of likelihood of success made little difference to this result, but affected the spatial location of these investments. PMID:21844046

  6. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  7. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
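
    As a minimal illustration of iterative proportional fitting working from marginal sums (shown for the simple two-way independence model, not the loglinear IRT models the report treats):

```python
import numpy as np

def ipf_independence(table, tol=1e-10, max_iter=1000):
    """Fit the independence log-linear model to a two-way table by
    alternately rescaling rows and columns until the fitted counts
    reproduce the observed marginal sums (the MLE for this model)."""
    table = np.asarray(table, dtype=float)
    fitted = np.ones_like(table)
    row_m, col_m = table.sum(axis=1), table.sum(axis=0)
    for _ in range(max_iter):
        fitted *= (row_m / fitted.sum(axis=1))[:, None]
        fitted *= col_m / fitted.sum(axis=0)
        if np.allclose(fitted.sum(axis=1), row_m, atol=tol):
            break
    return fitted
```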

  8. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  9. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  10. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  11. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
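
    Under the multivariate-normal model described, the ML combination of several correlated, unbiased estimates of a single eigenvalue reduces to inverse-covariance weighting; a minimal sketch (interface assumed, not the SAM-CE/VIM analysis code):

```python
import numpy as np

def ml_combine(estimates, cov):
    """Minimum-variance (ML) combination of correlated estimates of one
    quantity: weights w = C^{-1}1 / (1'C^{-1}1). Returns the combined
    estimate and its variance 1 / (1'C^{-1}1)."""
    x = np.asarray(estimates, dtype=float)
    cinv_one = np.linalg.solve(cov, np.ones_like(x))
    w = cinv_one / cinv_one.sum()
    return w @ x, 1.0 / cinv_one.sum()
```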

  12. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  13. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  14. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

    A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified averseness to risk and other measures of optimality.
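
    A toy sketch of the last step described above: if each planned action's resource use is modeled as an independent Gaussian, the net use is Gaussian with summed means and variances, and the conflict probability is one minus a normal CDF. (The system actually uses bounded Gaussians and several approximation schemes; this sketch ignores the truncation, and all numbers are hypothetical.)

```python
import math

def conflict_probability(mean_uses, var_uses, capacity):
    """P(total resource use > capacity) when per-action uses are
    independent Gaussians: 1 - Phi((capacity - mu) / sigma)."""
    mu = sum(mean_uses)
    sigma = math.sqrt(sum(var_uses))
    z = (capacity - mu) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Three actions drawing on one resource with capacity 100 units.
print(conflict_probability([30.0, 25.0, 35.0], [16.0, 9.0, 25.0], 100.0))
```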

  15. The role of self-regulatory efficacy, moral disengagement and guilt on doping likelihood: A social cognitive theory perspective.

    PubMed

    Ring, Christopher; Kavussanu, Maria

    2018-03-01

    Given the concern over doping in sport, researchers have begun to explore the role played by self-regulatory processes in the decision whether to use banned performance-enhancing substances. Grounded on Bandura's (1991) theory of moral thought and action, this study examined the role of self-regulatory efficacy, moral disengagement and anticipated guilt on the likelihood to use a banned substance among college athletes. Doping self-regulatory efficacy was associated with doping likelihood both directly (b = -.16, P < .001) and indirectly (b = -.29, P < .001) through doping moral disengagement. Moral disengagement also contributed directly to higher doping likelihood and lower anticipated guilt about doping, which was associated with higher doping likelihood. Overall, the present findings provide evidence to support a model of doping based on Bandura's social cognitive theory of moral thought and action, in which self-regulatory efficacy influences the likelihood to use banned performance-enhancing substances both directly and indirectly via moral disengagement.

  16. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  17. Essays in the California electricity reserves markets

    NASA Astrophysics Data System (ADS)

    Metaxoglou, Konstantinos

    This dissertation examines inefficiencies in the California electricity reserves markets. In Chapter 1, I use the information released during the investigation of the state's electricity crisis of 2000 and 2001 by the Federal Energy Regulatory Commission to diagnose allocative inefficiencies. Building upon the work of Wolak (2000), I calculate a lower bound for the sellers' price-cost margins using the inverse elasticities of their residual demand curves. The downward bias in my estimates stems from the fact that I don't account for the hierarchical substitutability of the reserve types. The margins averaged at least 20 percent for the two highest quality types of reserves, regulation and spinning, generating millions of dollars in transfers to a handful of sellers. I provide evidence that the deviations from marginal cost pricing were due to the markets' high concentration and a principal-agent relationship that emerged from their design. In Chapter 2, I document systematic differences between the markets' day- and hour-ahead prices. I use a high-dimensional vector moving average model to estimate the premia and conduct correct inferences. To obtain exact maximum likelihood estimates of the model, I employ the EM algorithm that I develop in Chapter 3. I uncover significant day-ahead premia, which I attribute to market design characteristics too. On the demand side, the market design established a principal-agent relationship between the markets' buyers (principal) and their supervisory authority (agent). The agent had very limited incentives to shift reserve purchases to the lower priced hour-ahead markets. On the supply side, the market design raised substantial entry barriers by precluding purely speculative trading and by introducing a complicated code of conduct that induced uncertainty about which actions were subject to regulatory scrutiny. In Chapter 3, I introduce a state-space representation for vector autoregressive moving average models that enables exact maximum likelihood estimation using the EM algorithm. Moreover, my algorithm uses only analytical expressions; it requires the Kalman filter and a fixed-interval smoother in the E step and least squares-type regression in the M step. In contrast, existing maximum likelihood estimation methods require numerical differentiation, both for univariate and multivariate models.

  18. The effect of inspiratory and expiratory respiratory muscle training in rowers.

    PubMed

    Forbes, S; Game, A; Syrotuik, D; Jones, R; Bell, G J

    2011-10-01

    This study examined inspiratory and expiratory resistive loading combined with strength and endurance training on pulmonary function and rowing performance. Twenty-one male (n = 9) and female (n = 12) rowers were matched on 2000 m simulated rowing race time and gender and randomly assigned to two groups. The experimental group trained respiratory muscles using a device that provided both an inspiratory and expiratory resistance while the control group used a SHAM device. Respiratory muscle training (RMT) or SHAM was performed 6 d/wk concurrent with strength (3 d/wk) and endurance (3 d/wk) training on alternate days for 10 weeks. Respiratory muscle training (RMT) enhanced maximum inspiratory (PI(max)) and expiratory (PE(max)) strength at rest and during recovery from exercise (P < 0.05). Both groups showed improvements in peak VO2, strength, and 2000 m performance time (P < 0.05). It was concluded that RMT is effective for improving respiratory strength but did not facilitate greater improvements to simulated 2000 m rowing performance.

  19. Inspiratory and expiratory respiratory muscle training as an adjunct to concurrent strength and endurance training provides no additional 2000 m performance benefits to rowers.

    PubMed

    Bell, Gordon J; Game, Alex; Jones, Richard; Webster, Travis; Forbes, Scott C; Syrotuik, Dan

    2013-01-01

    The purpose of this study was to examine respiratory muscle training (RMT) combined with 9 weeks of resistance and endurance training on rowing performance and cardiopulmonary responses. Twenty-seven rowers (mean ± SD: age = 27 ± 9 years; height = 176.9 ± 10.8 cm; and body mass = 76.1 ± 12.6 kg) were randomly assigned to an inspiratory only (n = 13) or expiratory only (n = 14) training group. Both RMT programs were 3 sets of 10 reps, 6 d/wk in addition to an identical 3 d/wk resistance and 3 d/wk endurance training program. Both groups showed similar improvements in 2000 m rowing performance, cardiorespiratory fitness, strength, and maximum inspiratory (PImax) and expiratory (PEmax) pressures (p < .05). It was concluded that there were no additional benefits of 9 weeks of inspiratory or expiratory RMT on simulated 2000 m rowing performance or cardiopulmonary responses when combined with resistance and endurance training in rowers.

  20. Thought-action fusion and its relationship to schizotypy and OCD symptoms.

    PubMed

    Lee, Han-Joo; Cougle, Jesse R; Telch, Michael J

    2005-01-01

    Thought-action fusion (TAF) is a cognitive bias that has been linked to obsessive-compulsive disorder (OCD). Preliminary evidence suggests schizotypal traits may be associated with some types of OCD obsessions but not others. We examined the relationship between each of the two major types of TAF (i.e., likelihood and moral), schizotypal traits, and OCD symptoms in 969 nonclinical undergraduate students. We hypothesized that likelihood TAF would be associated with schizotypal traits; whereas moral TAF would not. Consistent with prediction, schizotypal-magical thinking was significantly associated with likelihood TAF even after controlling for the effects of OCD symptoms, general anxiety, and depression. Moreover, the relationship between likelihood TAF and OCD symptoms was significantly attenuated after controlling for schizotypal traits. In contrast, moral TAF demonstrated negligible association with OCD symptoms, depression, or schizotypal traits. These findings provide preliminary support for the linkage between likelihood TAF and schizotypal traits.

  1. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
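
    A rough sketch of the maximum-likelihood route with hypothetical numbers (the paper's data, fitting, and bootstrap confidence machinery are far more careful):

```python
import numpy as np
from scipy import stats

# Hypothetical storm-time -Dst maxima (nT); the paper uses 1957-2012 data.
dst_maxima = np.array([120., 95., 310., 150., 240., 85., 430., 190.])

# Maximum-likelihood log-normal fit with no location shift.
shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0.0)

# Tail probability of exceeding the Carrington level (-Dst >= 850 nT),
# scaled by the storm rate to give expected occurrences per century
# (8 storms over an assumed 56-year record here).
p_exceed = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)
print(p_exceed * 100.0 * len(dst_maxima) / 56.0)
```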

  2. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  3. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  4. Polarization signatures for abandoned agricultural fields in the Manix Basin area of the Mojave Desert - Can polarimetric SAR detect desertification?

    NASA Technical Reports Server (NTRS)

    Ray, Terrill W.; Farr, Tom G.; Van Zyl, Jakob J.

    1992-01-01

    Radar backscatter from abandoned circular alfalfa fields in the Manix Basin area of the Mojave desert shows systematic changes with length of abandonment. The obliteration of circular planting rows by surface processes could account for the disappearance of bright spokes, which seem to be reflection patterns from remnants of the planting rows, with increasing length of abandonment. An observed shift in the location of the maximum L-band copolarization return away from VV, as well as an increase in surface roughness, both occurring with increasing age of abandonment, seems to be attributable to the formation of wind ripples on the relatively vegetationless fields.

  5. CREW PORTRAIT - SPACE SHUTTLE MISSION 41B

    NASA Image and Video Library

    1983-01-01

    S83-40555 (15 October 1983) --- These five astronauts are in training for the STS-41B mission, scheduled early next year. On the front row are Vance D. Brand, commander; and Robert L. Gibson, pilot. Mission specialists (back row, left to right) are Robert L. Stewart, Dr. Ronald E. McNair and Bruce McCandless II. Stewart and McCandless are wearing Extravehicular Mobility Units (EMU) space suits. The STS program's second extravehicular activity (EVA) is to be performed on this flight, largely as a rehearsal for a scheduled repair visit to the Solar Maximum Satellite (SMS), on a later mission. The Manned Maneuvering Unit (MMU) will make its space debut on STS-41B.

  6. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
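
    The maximum-likelihood cue-combination result reviewed here has a one-line form: for independent Gaussian cues, the ML estimate is the inverse-variance-weighted average, and the fused variance is below either input variance. A minimal sketch (all numbers hypothetical):

        import numpy as np

        def ml_combine(estimates, variances):
            # ML fusion of independent Gaussian cues: weight each cue by the
            # inverse of its variance, normalized to sum to one.
            v = np.asarray(variances, dtype=float)
            w = (1.0 / v) / np.sum(1.0 / v)
            fused = float(np.dot(w, estimates))
            fused_var = 1.0 / np.sum(1.0 / v)    # always <= min(variances)
            return fused, fused_var

        # Hypothetical visual and haptic size estimates (cm) with their variances.
        size, var = ml_combine([5.2, 4.6], [0.10, 0.40])
        print(size, var)   # the fused estimate sits closer to the reliable visual cue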

  7. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
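
    A stripped-down sketch of the approach for complete (uncensored) data and a two-component mixture; the E-step computes subpopulation responsibilities and the M-step performs weighted Weibull MLEs numerically. This is not the paper's postmortem/nonpostmortem algorithm, just the general EM pattern it follows:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        def em_weibull_mixture(t, k=2, iters=50, seed=0):
            # EM for a k-subpopulation Weibull mixture on complete (uncensored) data.
            rng = np.random.default_rng(seed)
            w = np.full(k, 1.0 / k)                      # mixing proportions
            shape = rng.uniform(0.8, 2.5, k)             # random starting shapes
            scale = np.quantile(t, np.linspace(0.25, 0.75, k))
            for _ in range(iters):
                # E-step: responsibility of each subpopulation for each failure time.
                dens = np.stack([w[j] * weibull_min.pdf(t, shape[j], scale=scale[j])
                                 for j in range(k)])
                r = dens / np.maximum(dens.sum(axis=0, keepdims=True), 1e-300)
                # M-step: closed-form proportions, numerical weighted Weibull MLEs.
                w = r.mean(axis=1)
                for j in range(k):
                    nll = lambda p, j=j: -np.sum(
                        r[j] * weibull_min.logpdf(t, np.exp(p[0]), scale=np.exp(p[1])))
                    res = minimize(nll, np.log([shape[j], scale[j]]), method="Nelder-Mead")
                    shape[j], scale[j] = np.exp(res.x)
            return w, shape, scale

    Consistent with the abstract's caution about multiple local maxima, in practice one would rerun this from several seeds and keep the solution with the highest log-likelihood.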

  8. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  9. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimations and inference methods for traditional survival data are not directly applicable for length-biased right-censored data. We propose new expectation-maximization algorithms for estimations based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating nonparametric distribution function, estimating nonparametric hazard function under an increasing failure rate constraint, and jointly estimating baseline hazards function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  10. The Effects of Multiple-Joint Isokinetic Resistance Training on Maximal Isokinetic and Dynamic Muscle Strength and Local Muscular Endurance.

    PubMed

    Ratamess, Nicholas A; Beller, Noah A; Gonzalez, Adam M; Spatz, Gregory E; Hoffman, Jay R; Ross, Ryan E; Faigenbaum, Avery D; Kang, Jie

    2016-03-01

    The transfer of training effects of multiple-joint isokinetic resistance training to dynamic exercise performance remains poorly understood. Thus, the purpose of the present study was to investigate the magnitude of isokinetic and dynamic one repetition-maximum (1RM) strength and local muscular endurance increases after 6 weeks of multiple-joint isokinetic resistance training. Seventeen women were randomly assigned to either an isokinetic resistance training group (IRT) or a non-exercising control group (CTL). The IRT group underwent 6 weeks of training (2 days per week) consisting of 5 sets of 6-10 repetitions at 75-85% of subjects' peak strength for the isokinetic chest press and seated row exercises at an average linear velocity of 0.15 m s(-1) [3-sec concentric (CON) and 3-sec eccentric (ECC) phases]. Peak CON and ECC force during the chest press and row, 1RM bench press and bent-over row, and maximum number of modified push-ups were assessed pre and post training. A 2 x 2 analysis of variance with repeated measures and Tukey's post hoc tests were used for data analysis. The results showed that 1RM bench press (from 38.6 ± 6.7 to 43.0 ± 5.9 kg), 1RM bent-over row (from 40.4 ± 7.7 to 45.5 ± 7.5 kg), and the maximal number of modified push-ups (from 39.5 ± 13.6 to 55.3 ± 13.1 repetitions) increased significantly only in the IRT group. Peak isokinetic CON and ECC force in the chest press and row significantly increased in the IRT group. No differences were shown in the CTL group for any measure. These data indicate 6 weeks of multiple-joint isokinetic resistance training increases dynamic muscle strength and local muscular endurance performance in addition to specific isokinetic strength gains in women. Key points: Multiple-joint isokinetic resistance training increases dynamic maximal muscular strength, local muscular endurance, and maximal isokinetic strength in women. Multiple-joint isokinetic resistance training increased 1RM strength in the bench press (by 10.2%), bent-over barbell row (by 11.2%), and maximal modified push-up performance (by 28.6%), indicating a carryover of training effects to dynamic exercise performance. The carryover effects may be attractive to strength training and conditioning professionals seeking to include alternative modalities such as multiple-joint isokinetic dynamometers in resistance training programs. Copyright 2013, SLACK Incorporated.

  11. The Effects of Multiple-Joint Isokinetic Resistance Training on Maximal Isokinetic and Dynamic Muscle Strength and Local Muscular Endurance

    PubMed Central

    Ratamess, Nicholas A.; Beller, Noah A.; Gonzalez, Adam M.; Spatz, Gregory E.; Hoffman, Jay R.; Ross, Ryan E.; Faigenbaum, Avery D.; Kang, Jie

    2016-01-01

    The transfer of training effects of multiple-joint isokinetic resistance training to dynamic exercise performance remains poorly understood. Thus, the purpose of the present study was to investigate the magnitude of isokinetic and dynamic one repetition-maximum (1RM) strength and local muscular endurance increases after 6 weeks of multiple-joint isokinetic resistance training. Seventeen women were randomly assigned to either an isokinetic resistance training group (IRT) or a non-exercising control group (CTL). The IRT group underwent 6 weeks of training (2 days per week) consisting of 5 sets of 6-10 repetitions at 75-85% of subjects’ peak strength for the isokinetic chest press and seated row exercises at an average linear velocity of 0.15 m s-1 [3-sec concentric (CON) and 3-sec eccentric (ECC) phases]. Peak CON and ECC force during the chest press and row, 1RM bench press and bent-over row, and maximum number of modified push-ups were assessed pre and post training. A 2 x 2 analysis of variance with repeated measures and Tukey’s post hoc tests were used for data analysis. The results showed that 1RM bench press (from 38.6 ± 6.7 to 43.0 ± 5.9 kg), 1RM bent-over row (from 40.4 ± 7.7 to 45.5 ± 7.5 kg), and the maximal number of modified push-ups (from 39.5 ± 13.6 to 55.3 ± 13.1 repetitions) increased significantly only in the IRT group. Peak isokinetic CON and ECC force in the chest press and row significantly increased in the IRT group. No differences were shown in the CTL group for any measure. These data indicate 6 weeks of multiple-joint isokinetic resistance training increases dynamic muscle strength and local muscular endurance performance in addition to specific isokinetic strength gains in women. Key points: Multiple-joint isokinetic resistance training increases dynamic maximal muscular strength, local muscular endurance, and maximal isokinetic strength in women. Multiple-joint isokinetic resistance training increased 1RM strength in the bench press (by 10.2%), bent-over barbell row (by 11.2%), and maximal modified push-up performance (by 28.6%), indicating a carryover of training effects to dynamic exercise performance. The carryover effects may be attractive to strength training and conditioning professionals seeking to include alternative modalities such as multiple-joint isokinetic dynamometers in resistance training programs. PMID:26957924

  12. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
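
    A compact illustration of the second (two-stage maximum likelihood) procedure for a copula model, using exponential margins and a Clayton copula as stand-ins; the data, margins, and copula family are illustrative assumptions, not the dissertation's choices:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)
        x = rng.exponential(2.0, 500)           # hypothetical paired failure times
        y = x * rng.uniform(0.5, 1.5, 500)      # crudely dependent second margin

        # Stage 1: exponential-margin MLEs (rate = 1/mean), then probability transform.
        u = 1.0 - np.exp(-x / x.mean())
        v = 1.0 - np.exp(-y / y.mean())

        # Stage 2: maximize the Clayton copula log-likelihood with margins held fixed.
        def neg_loglik(theta):
            s = u**(-theta) + v**(-theta) - 1.0
            logc = (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
                    - (2.0 + 1.0 / theta) * np.log(s))
            return -np.sum(logc)

        theta_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0), method="bounded").x
        print("Clayton dependence parameter:", theta_hat)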

  13. A new approach to quantifying physical demand in rugby union.

    PubMed

    Lacome, Mathieu; Piscione, Julien; Hager, Jean-Philippe; Bourdin, Muriel

    2014-01-01

    The objective of the study was to describe an original approach to assessing individual workload during international rugby union competitions. The difference between positional groups and between the two halves was explored. Sixty-seven files from 30 French international rugby union players were assessed on a computerised player-tracking system (Amisco Pro(®), Sport Universal Process, Nice, France) during five international games. Each player's action was split up into exercise and recovery periods according to his individual velocity threshold. Exercise-to-recovery (E:R) period ratios and acceleration were calculated. Results indicated that about 65% of exercise periods lasted less than 4 s; half of the E:Rs were less than 1:4, about one-third ranged between 1:1 and 1:4, and about 40% of exercise periods were classified as medium intensity. Most acceleration values were less than 3 m·s(-2) and started from standing or walking activity. Back row players showed the highest mean acceleration values over the game (P < 0.05). No significant decrease in physical performance was seen between the first and second halves of the games except for back rows, who showed a significant decrease in mean acceleration (P < 0.05). The analysis of results emphasised the specific activity of back rows and tended to suggest that the players' combinations of action and recovery times were optimal for preventing a large decrease in physical performance.

  14. Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy

    DTIC Science & Technology

    2016-03-05

    Knapp, Mary; Robey, Frank; Volz, Ryan; Lind, Frank; Fenn, Alan; Morris, Alex; Silver, Mark; Klein, Sarah (haystack.mit.edu)

    Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets … observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to …

  15. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  16. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  17. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  18. Two-dimensional Cascade Investigation of the Maximum Exit Tangential Velocity Component and Other Flow Conditions at the Exit of Several Turbine Blade Designs at Supercritical Pressure Ratios

    NASA Technical Reports Server (NTRS)

    Hauser, Cavour H; Plohr, Henry W

    1951-01-01

    The nature of the flow at the exit of a row of turbine blades for the range of conditions represented by four different blade configurations was evaluated by the conservation-of-momentum principle using static-pressure surveys and by analysis of Schlieren photographs of the flow. It was found that for blades of the type investigated, the maximum exit tangential-velocity component is a function of the blade geometry only and can be accurately predicted by the method of characteristics. A maximum value of exit velocity coefficient is obtained at a pressure ratio immediately below that required for maximum blade loading followed by a sharp drop after maximum blade loading occurs.

  19. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  20. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on the maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative update of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy where accurate and stabilized optical pointing is essential.
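
    A reduced version of the idea: for a known target profile and additive white Gaussian noise, the ML estimate of an integer-pixel shift is the peak of the cross-correlation between the received image and the reference, which is the simplest case of the transform-domain weighted correlation described above. A sketch with a toy profile:

        import numpy as np

        def ml_shift(received, reference):
            # Under white Gaussian noise, the ML integer-pixel shift maximizes the
            # circular cross-correlation, computed here in the Fourier domain.
            corr = np.fft.ifft2(np.fft.fft2(received) * np.conj(np.fft.fft2(reference))).real
            idx = np.unravel_index(np.argmax(corr), corr.shape)
            # Fold indices past the midpoint to negative shifts.
            return [int(i) if i <= n // 2 else int(i) - n for i, n in zip(idx, corr.shape)]

        ref = np.zeros((64, 64)); ref[30:34, 30:34] = 1.0   # toy known target profile
        img = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)  # true shift (3, -5)
        img = img + 0.05 * np.random.default_rng(2).standard_normal(img.shape)
        print(ml_shift(img, ref))                           # -> [3, -5]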

  1. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  2. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
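
    For reference, the maximum-likelihood baseline being compared against models each class as a multivariate Gaussian estimated from training pixels and assigns each pixel to the class with the highest log-likelihood. A minimal sketch (hypothetical arrays X of pixel feature vectors and y of class labels):

        import numpy as np

        def train_ml(X, y):
            # Per-class sample mean and covariance: the Gaussian ML "training" step.
            return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
                    for c in np.unique(y)}

        def classify_ml(X, stats):
            # Assign each feature vector to the class with the highest Gaussian
            # log-likelihood (quadratic discriminant rule).
            labels, scores = [], []
            for c, (mu, S) in stats.items():
                Sinv = np.linalg.inv(S)
                _, logdet = np.linalg.slogdet(S)
                d = X - mu
                scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, Sinv, d) + logdet))
                labels.append(c)
            return np.asarray(labels)[np.argmax(scores, axis=0)]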

  3. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  4. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
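
    A minimal sketch of one such maximum-likelihood logistic regression (all flow values and the drought indicator below are hypothetical; the report's models use basin-specific winter-month streamflows and thresholds):

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical basin data: mean winter (Nov-Feb) flow in cfs and an
        # indicator of whether flow later fell below a July drought threshold.
        winter_flow  = np.array([120., 95., 60., 150., 45., 80., 130., 55., 70., 110.])
        july_drought = np.array([  0,   0,   1,   0,   1,   0,   0,   1,   1,   0])

        X = sm.add_constant(np.log(winter_flow))     # log-flow as the predictor
        fit = sm.Logit(july_drought, X).fit(disp=0)  # maximum-likelihood fit

        # Drought probability for a basin whose winter mean flow was 65 cfs.
        print(fit.params, fit.predict([1.0, np.log(65.0)]))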

  5. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.

  6. F-8C adaptive flight control laws

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.

    1977-01-01

    Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft application. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effects on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.

  7. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  8. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  9. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
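
    Under the Gaussian-noise assumption with known per-point variances, the maximum-likelihood fit of the quadratic response reduces to weighted least squares; the paper's point is that the weights (count noise plus digitization error) and model adequacy drive the coefficient uncertainties. A hedged sketch with hypothetical calibration numbers, ignoring the errors-in-variables subtlety the authors treat:

        import numpy as np

        # Hypothetical calibration pairs: DN readout vs. known aperture radiance L,
        # with per-point standard deviations sigma (noise plus digitization error).
        dn    = np.array([ 50., 210., 380., 560., 760., 980.])
        L     = np.array([ 1.0,  4.1,  7.6, 11.4, 15.8, 20.9])
        sigma = np.array([0.05, 0.08, 0.10, 0.14, 0.20, 0.25])

        # Gaussian ML with known variances == weighted least squares: scale each
        # row of the design matrix and the response by 1/sigma.
        A = np.vander(dn, 3, increasing=True)        # columns: 1, DN, DN^2
        coef, *_ = np.linalg.lstsq(A / sigma[:, None], L / sigma, rcond=None)
        print("c0, c1, c2 =", coef)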

  10. Inferring Phylogenetic Networks Using PhyloNet.

    PubMed

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  11. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data, including a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and 2-part Beta regression models provide flexible approaches to regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

  12. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
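
    The history-dependent model is the paper's contribution, but the simplest history-independent baseline it generalizes has a closed-form ML solution worth keeping in mind: for a noisy perfect integrator, inter-spike intervals are inverse-Gaussian (first-passage) distributed and the MLE of its two parameters is analytic. A sketch with hypothetical intervals and the threshold normalized to 1:

        import numpy as np

        # Hypothetical inter-spike intervals (seconds). For a perfect integrator
        # driven by noisy input, first-passage times to threshold are inverse-
        # Gaussian distributed, and the ML estimates are closed form.
        isi = np.array([0.021, 0.034, 0.018, 0.052, 0.027, 0.041, 0.030, 0.025])

        mu_hat = isi.mean()                                    # MLE of the mean ISI
        lam_hat = isi.size / np.sum(1.0 / isi - 1.0 / mu_hat)  # MLE of the shape

        # With the threshold normalized to 1: drift = 1/mu, noise variance = 1/lambda.
        print("drift:", 1.0 / mu_hat, "noise variance:", 1.0 / lam_hat)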

  13. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091

  14. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
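
    For context, the unregularized single-frame core of Poisson maximum-likelihood deconvolution is the classic Richardson-Lucy fixed-point iteration; the algorithm described above builds on this idea with multi-frame joint likelihoods, regularization, frame selection, and PSF estimation. A minimal single-frame sketch (PSF assumed known):

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30):
            # Fixed-point iteration for single-frame Poisson ML deconvolution;
            # psf is assumed known and normalized to sum to one.
            est = np.full_like(image, image.mean(), dtype=float)
            psf_flip = psf[::-1, ::-1]
            for _ in range(n_iter):
                denom = fftconvolve(est, psf, mode='same')
                est = est * fftconvolve(image / np.maximum(denom, 1e-12),
                                        psf_flip, mode='same')
            return est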

  15. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  16. The Relationship between Public Middle School Teachers' Reports of Their Empathy and Their Reports of Their Likelihood of Intervening in a Bullying Situation: An Action Research Study

    ERIC Educational Resources Information Center

    Singh, Jasdeep

    2013-01-01

    The purpose of this action research was to explore and describe the relationship between middle school teachers' reports of their empathy and their reports of their likelihood of intervening in a bullying situation. Teacher volunteers from a single middle school within a suburban school district in a northeastern state were asked to complete…

  17. 34 CFR 682.704 - Emergency action.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Emergency action. (a) The Secretary, or a designated Departmental official, may take emergency action to … determines that immediate action is necessary to prevent the likelihood of substantial losses by the Federal …

  18. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
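
    As a toy illustration of likelihood methods for event positioning (not the Center's actual pipeline): given a calibrated mean detector response function giving expected counts on each sensor versus interaction position, the ML position estimate maximizes the Poisson log-likelihood of the observed signals over the calibration grid. All names and numbers below are hypothetical:

        import numpy as np

        def ml_position(signals, mdrf):
            # signals: observed counts on each of m sensors; mdrf: calibrated mean
            # detector response, shape (n_positions, m), expected counts per sensor.
            # Poisson log-likelihood up to a constant: sum_j s_j*log(mdrf) - mdrf.
            logl = signals @ np.log(mdrf).T - mdrf.sum(axis=1)
            return int(np.argmax(logl))

        # Toy 1-D detector with two PMTs whose mean response falls off linearly.
        grid = np.linspace(0.0, 1.0, 101)
        mdrf = np.stack([100.0 * (1.0 - grid) + 5.0, 100.0 * grid + 5.0], axis=1)
        event = np.random.default_rng(3).poisson(mdrf[70])   # event near x = 0.7
        print(grid[ml_position(event, mdrf)])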

  19. Wavelength calibration of an imaging spectrometer based on Savart interferometer

    NASA Astrophysics Data System (ADS)

    Li, Qiwei; Zhang, Chunmin; Yan, Tingyu; Quan, Naicheng; Wei, Yutong; Tong, Cuncun

    2017-09-01

    The basic principle of a Fourier-transform imaging spectrometer (FTIS) based on a Savart interferometer is outlined. The non-uniform distribution of the optical path difference, which leads to a wavelength drift in each row of the interferogram, is analyzed. Two typical methods for wavelength calibration of the presented system are described. The first method unifies the different spectral intervals and maximum spectral frequencies of each row using a reference monochromatic light of known wavelength, and also involves the dispersion compensation of the Savart interferometer. The second approach is based on least-squares fitting, which builds the functional relation between recovered wavelength, row number, and calibrated wavelength through concise equations. The effectiveness of the two methods is experimentally demonstrated with monochromatic lights and a mixed light source across the detecting band of the system. The results indicate that the first method has higher precision, with the mean root-mean-square error of the recovered wavelengths significantly reduced from 19.896 nm to 1.353 nm, while the second method is more convenient to implement and also achieves good precision (2.709 nm).

  20. Pitfalls in 16-detector row CT of the coronary arteries.

    PubMed

    Nakanishi, Tadashi; Kayashima, Yasuyo; Inoue, Rintaro; Sumii, Kotaro; Gomyo, Yukihiko

    2005-01-01

    Recently developed 16-detector row computed tomography (CT) has been introduced as a reliable noninvasive imaging modality for evaluating the coronary arteries. In most cases, with appropriate premedication that includes beta-blockers and nitroglycerin, ideal data sets can be acquired from which to obtain excellent-quality coronary CT angiograms, most often with multiplanar reformation, thin-slab maximum intensity projection, and volume rendering. However, various artifacts associated with data creation and reformation, postprocessing methods, and image interpretation can hamper accurate diagnosis. These artifacts can be related to pulsation (nonassessable segments, pseudostenosis) as well as rhythm disorders, respiratory issues, partial volume averaging effect, high-attenuation entities, inappropriate scan pitch, contrast material enhancement, and patient body habitus. Some artifacts have already been resolved with technical advances, whereas others represent partially inherent limitations of coronary CT angiography. Familiarity with the pitfalls of coronary angiography with 16-detector row CT, coupled with the knowledge of both the normal anatomy and anatomic variants of the coronary arteries, can almost always help radiologists avoid interpretive errors in the diagnosis of coronary artery stenosis. (c) RSNA, 2005.

  1. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
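
    For orientation, the core estimation step underneath an adaptive procedure like UML is a maximum-likelihood fit of a parametric psychometric function to the accumulated trial data. A minimal sketch in Python (the toolbox itself is MATLAB; the logistic form, parameter bounds, and starting values below are illustrative assumptions, not the toolbox's internals):

        import numpy as np
        from scipy.optimize import minimize

        def psychometric(x, alpha, beta, lam, gamma=0.5):
            # Logistic psychometric function for a 2AFC task: guess rate gamma,
            # lapse rate lam, threshold alpha, slope beta.
            return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-beta * (x - alpha)))

        def fit_psychometric(x, correct):
            # Joint maximum-likelihood estimate of (alpha, beta, lam) from
            # per-trial stimulus levels x and 0/1 responses.
            def nll(p):
                a, b, l = p
                if b <= 0 or not (0.0 <= l <= 0.1):
                    return np.inf                  # crude bound constraints
                pr = np.clip(psychometric(x, a, b, l), 1e-9, 1 - 1e-9)
                return -np.sum(correct * np.log(pr) + (1 - correct) * np.log(1 - pr))
            return minimize(nll, x0=[np.median(x), 1.0, 0.02], method="Nelder-Mead").x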

  2. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through TAP.

  3. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
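
    A minimal sketch of the maximum-likelihood step this analysis relies on: for a continuous power law above a lower cutoff xmin, the exponent estimate is closed form (Clauset-style), and re-fitting while varying xmin is what exposes the nonuniversal, cutoff-dependent exponents described above. Data and variable names are illustrative:

        import numpy as np

        def power_law_alpha(x, xmin):
            # Continuous power-law exponent MLE above a lower cutoff xmin,
            # with the standard large-n error estimate.
            x = np.asarray(x, dtype=float)
            x = x[x >= xmin]
            alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
            return alpha, (alpha - 1.0) / np.sqrt(x.size)

        # Scanning the cutoff exposes mixing: a single power law yields a plateau
        # in alpha-hat, while superposed or damped distributions yield a drifting
        # exponent, e.g. (with hypothetical event energies in `events`):
        # for xmin in np.logspace(-2, 1, 20): print(xmin, power_law_alpha(events, xmin))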

  4. Knotless single-row rotator cuff repair: a comparative biomechanical study of 2 knotless suture anchors.

    PubMed

    Efird, Chad; Traub, Shaun; Baldini, Todd; Rioux-Forker, Dana; Spalazzi, Jeffrey P; Davisson, Twana; Hawkins, Monica; McCarty, Eric

    2013-08-01

    The purpose of this study was to compare the gap formation during cyclic loading, maximum repair strength, and failure mode of single-row full-thickness supraspinatus repairs performed using 2 knotless suture anchors with differing internal suture-retention mechanisms in a human cadaver model. Nine matched pairs of cadaver shoulders were used. Full-thickness tears were induced by detaching the supraspinatus tendon from the greater tuberosity. Single-row repairs were performed with either type I (Opus Magnum PI; ArthroCare, Austin, Texas) or type II (ReelX STT; Stryker, Mahwah, New Jersey) knotless suture anchors. The repaired tendon was cycled from 10 to 90 N for 500 cycles, followed by load to failure. Gap formation was measured at 5, 100, 200, 300, 400, and 500 cycles with a video digitizing system. Anchor type or location (anterior or posterior) had no effect on gap formation during cyclic loading regardless of position (anterior, P=.385; posterior, P=.389). Maximum load to failure was significantly greater (P=.018) for repairs performed with type II anchors (288±62 N) compared with type I anchors (179±39 N). Primary failure modes were anchor pullout and tendon tearing for type II anchors and suture slippage through the anchor for type I anchors. The internal ratcheting suture-retention mechanism of type II anchors may have helped this anchor outperform the suture-cinching mechanism of type I anchors by supporting significantly higher loads before failure and minimizing suture slippage, potentially leading to stronger repairs clinically. Copyright 2013, SLACK Incorporated.

  5. 30 CFR 585.306 - What action will BOEM take on my request?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....306 Section 585.306 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Rights-of-Way Grants and Rights-of-Use and Easement Grants for Renewable Energy Activities Obtaining Row...

  6. 30 CFR 585.306 - What action will BOEM take on my request?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....306 Section 585.306 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Rights-of-Way Grants and Rights-of-Use and Easement Grants for Renewable Energy Activities Obtaining Row...

  7. 78 FR 34031 - Burned Area Emergency Response, Forest Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ...) Evaluate potential threats to critical values; (2) determine the risk level for each threat; (3) identify... actions that meet the objectives; (6) evaluate potential response actions on likelihood for timely... stabilization actions. Improved the descriptive guidelines for employing response actions involving...

  8. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C. [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  9. Acute muscular strength assessment using free weight bars of different thickness.

    PubMed

    Ratamess, Nicholas A; Faigenbaum, Avery D; Mangine, Gerald T; Hoffman, Jay R; Kang, Jie

    2007-02-01

    The purpose of the present investigation was to examine strength performance of 6 common resistance training exercises using free weight bars of different thickness. Eleven resistance-trained men (8.2 +/- 2.6 years of experience; age: 22.1 +/- 1.6 years; body mass: 90.5 +/- 8.9 kg) underwent 1 repetition maximum (1RM) strength testing on 6 occasions in random order for the deadlift, bent-over row, upright row, bench press, seated shoulder press, and arm curl exercises under 3 conditions using: (a) a standard Olympic bar (OL), (b) a 2-inch thick bar (5.08 cm grip span), and (c) a 3-inch thick bar (7.62 cm grip span). Significant (p < 0.05) interactions were observed for the "pulling" exercises. For the deadlift and bent-over row, highest 1RM values were obtained with OL, followed by the 2- and 3-inch bar. Significant 1RM performance decrements for the 2- and 3-inch bars were approximately 28.3 and 55.0%, respectively, for the deadlift; decrements for the 2- and 3-inch bars were approximately 8.9 and 37.3%, respectively, for the bent-over row. For the upright row and arm curl, similar 1RMs were obtained for OL and the 2-inch bar. However, a significant performance reduction was observed using the 3-inch bar (approximately 26.1% for the upright row and 17.6% for the arm curl). The reductions in 1RM loads correlated significantly to hand size and maximal isometric grip strength (r = -0.55 to -0.73). No differences were observed between bars for the bench press or shoulder press. In conclusion, the use of 2- and 3-inch thick bars may result in initial weight reductions primarily for pulling exercises presumably due to greater reliance on maximal grip strength and larger hand size.

  10. Thought-action fusion in individuals with OCD symptoms.

    PubMed

    Amir, N; Freshman, M; Ramsey, B; Neary, E; Brigidi, B

    2001-07-01

    Rachman (Rachman, S. (1993). Obsessions, responsibility, and guilt. Behaviour Research and Therapy, 31, 149-154) suggested that patients with OCD may interpret thoughts as having special importance, thus experiencing thought-action fusion (TAF). Shafran, Thordarson and Rachman (Shafran, R., Thordarson, D. S. & Rachman, S. (1996). Thought-action fusion in obsessive compulsive disorder. Journal of Anxiety Disorders, 710, 379-391) developed a questionnaire (TAF) and found that obsessives scored higher than non-obsessives on the measure. In the current study, we modified the TAF to include a scale that assessed the "likelihood of events happening to others" as well as ratings of the responsibility and cost for having these thoughts. Replicating previous findings, we found that individuals with OC symptoms gave higher ratings to the likelihood of negative events happening as a result of their negative thoughts. Individuals with OC symptoms also rated the likelihood that they would prevent harm by their positive thoughts higher than did individuals without OC symptoms. These results suggest that the role of thought-action fusion in OCs may extend to exaggerated beliefs about thoughts regarding the reduction of harm.

  11. Phylogenetic place of guinea pigs: no support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.

    PubMed

    Cao, Y; Adachi, J; Yano, T; Hasegawa, M

    1994-07-01

    Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.

  12. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
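
    A toy sketch of the prior-likelihood machinery involved (illustrative only, not the paper's model): with a Gaussian prior and a Gaussian sensory likelihood, the optimal estimate is a reliability-weighted average, and learning the likelihood amounts to estimating its width from experienced cue errors.

      import numpy as np

      # Toy Gaussian cue-combination sketch: with prior N(mu_p, s_p^2) and a
      # sensory likelihood N(x, s_l^2), the posterior mean is the
      # reliability-weighted average of prior and cue.
      def posterior_mean(x, mu_p, s_p, s_l):
          w = (1 / s_l**2) / (1 / s_l**2 + 1 / s_p**2)   # weight on the cue
          return w * x + (1 - w) * mu_p

      # Learning the likelihood width amounts to estimating s_l from feedback;
      # the ML estimate from observed cue errors e_i = x_i - true_i is simply:
      def learn_likelihood_width(errors):
          return np.sqrt(np.mean(np.square(errors)))

      print(posterior_mean(x=3.0, mu_p=0.0, s_p=1.0, s_l=0.5))  # pulled toward cue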

  13. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  14. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  15. Psychometric properties of revised Thought-Action Fusion questionnaire (TAF-R) in an Iranian population.

    PubMed

    Pourfaraj, Majid; Mohammadi, Nourallah; Taghavi, Mohammadreza

    2008-12-01

    The purpose of this study is to examine the psychometric properties of the revised Thought-Action Fusion scale (TAF-R; Amir, N., Freshman, M., Ramsey, B., Neary, E., & Brigidi, B. (2001). Thought-action fusion in individuals with OCD symptoms. Behaviour Research and Therapy, 39, 765-776) in a sample of 565 (321 female) students of Shiraz University. Factor analysis using varimax rotation yielded eight factors that explained 80% of the total scale variance. These factors are labeled: moral TAF, responsibility for positive thoughts, likelihood of negative events, likelihood of positive events, responsibility for negative thoughts, responsibility for harm avoidance, likelihood of harm avoidance, and likelihood-self, respectively. The reliability coefficients of the total scale were calculated by two methods, internal consistency and test-retest, and were 0.81 and 0.61, respectively. Concurrent validity showed that TAF-R scores correlate positively and significantly with responsibility, guilt, and obsessive-compulsive symptoms. As expected, participants with high obsessive-compulsive symptoms had higher TAF-R scores than those with low symptoms. Moreover, subscale-total correlations showed that the correlations between subscales were low, while the correlations of subscales with the total TAF-R score were moderate.

  16. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
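
    For reference, the conventional Viterbi baseline mentioned above can be sketched compactly; the following Python example decodes a small rate-1/2 convolutional trellis with hard-decision Hamming metrics. The generator polynomials and the short received sequence are illustrative; the RMLD algorithm itself, with its sectionalized trellises and recursive metric tables, is not reproduced here.

      import itertools

      # Minimal hard-decision Viterbi decoder for the rate-1/2,
      # constraint-length-3 convolutional code with generators (7, 5) octal.
      G = [0b111, 0b101]                    # generator polynomials
      N_STATES = 4                          # 2^(K-1) with K = 3

      def outputs(state, bit):
          reg = (bit << 2) | state          # 3-bit shift-register contents
          return [bin(reg & g).count('1') & 1 for g in G]

      def viterbi(received):                # received: list of 2-bit tuples
          INF = 10**9
          metric = [0] + [INF] * (N_STATES - 1)
          paths = [[] for _ in range(N_STATES)]
          for r in received:
              new_metric = [INF] * N_STATES
              new_paths = [None] * N_STATES
              for s, b in itertools.product(range(N_STATES), (0, 1)):
                  if metric[s] == INF:
                      continue
                  ns = ((b << 2) | s) >> 1  # next state after shifting in b
                  dist = sum(o != x for o, x in zip(outputs(s, b), r))
                  if metric[s] + dist < new_metric[ns]:
                      new_metric[ns] = metric[s] + dist
                      new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(range(N_STATES), key=lambda s: metric[s])]

      print(viterbi([(1, 1), (0, 1), (0, 1)]))   # ML input bits (illustrative)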

  17. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  18. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ≈20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
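
    A minimal sketch of the approach in Python: minimize the negative Poisson log-likelihood of the observed counts instead of χ². The Gaussian-line model, the simulated counts, and the optimizer choice are placeholders, not the authors' diagnostic code.

      import numpy as np
      from scipy.optimize import minimize

      # Poisson maximum-likelihood fitting at low counts: minimize the negative
      # Poisson log-likelihood (up to a theta-independent constant) rather than
      # chi-square, which assumes Gaussian errors.
      def model(x, amp, mu, sigma):
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + 1.0  # + background

      def neg_loglik(theta, x, counts):
          lam = model(x, *theta)
          return np.sum(lam - counts * np.log(lam))

      x = np.linspace(-5.0, 5.0, 50)
      rng = np.random.default_rng(1)
      counts = rng.poisson(model(x, 8.0, 0.5, 1.2))  # low-count regime data

      fit = minimize(neg_loglik, x0=(5.0, 0.0, 1.0), args=(x, counts),
                     method='Nelder-Mead')
      print(fit.x)   # ML estimates of (amp, mu, sigma)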

  19. Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.

    2008-03-01

    Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes detected from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
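
    As an illustration of the Maximum Likelihood classifier used here, the following Python sketch implements the standard per-class Gaussian decision rule; the band means and training samples are synthetic stand-ins for the Landsat TM training sites.

      import numpy as np

      # Per-class Gaussian maximum-likelihood decision rule for multispectral
      # pixels: assign each pixel to the class maximizing the Gaussian
      # log-likelihood fitted from that class's training pixels.
      def fit_class(pixels):                 # pixels: (n, bands) training sample
          mu = pixels.mean(axis=0)
          cov = np.cov(pixels, rowvar=False)
          return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

      def log_likelihood(x, cls):
          mu, inv_cov, logdet = cls
          d = x - mu
          return -0.5 * (logdet + d @ inv_cov @ d)

      def classify(x, classes):
          return max(classes, key=lambda name: log_likelihood(x, classes[name]))

      rng = np.random.default_rng(2)
      classes = {                            # synthetic 3-band training data
          'water':  fit_class(rng.normal((30, 20, 10), 3, size=(200, 3))),
          'forest': fit_class(rng.normal((40, 60, 35), 5, size=(200, 3))),
      }
      print(classify(np.array([31, 22, 11]), classes))   # -> 'water'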

  20. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling of historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  1. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  2. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  3. An integrative approach to understanding the evolution and diversity of Copiapoa (Cactaceae), a threatened endemic Chilean genus from the Atacama Desert.

    PubMed

    Larridon, Isabel; Walter, Helmut E; Guerrero, Pablo C; Duarte, Milén; Cisternas, Mauricio A; Hernández, Carol Peña; Bauters, Kenneth; Asselman, Pieter; Goetghebeur, Paul; Samain, Marie-Stéphanie

    2015-09-01

    Species of the endemic Chilean cactus genus Copiapoa have cylindrical or (sub)globose stems that are solitary or form (large) clusters and typically yellow flowers. Many species are threatened with extinction. Despite being icons of the Atacama Desert and well loved by cactus enthusiasts, the evolution and diversity of Copiapoa has not yet been studied using a molecular approach. Sequence data of three plastid DNA markers (rpl32-trnL, trnH-psbA, ycf1) of 39 Copiapoa taxa were analyzed using maximum likelihood and Bayesian inference approaches. Species distributions were modeled based on geo-referenced localities and climatic data. Evolution of character states of four characters (root morphology, stem branching, stem shape, and stem diameter) as well as ancestral areas were reconstructed using a Bayesian and maximum likelihood framework, respectively. Clades of species are revealed. Though 32 morphologically defined species can be recognized, genetic diversity between some species and infraspecific taxa is too low to delimit their boundaries using plastid DNA markers. Recovered relationships are often supported by morphological and biogeographical patterns. The origin of Copiapoa likely lies between southern Peru and the extreme north of Chile. The Copiapó Valley limited colonization between two biogeographical areas. Copiapoa is here defined to include 32 species and five heterotypic subspecies. Thirty species are classified into four sections and two subsections, while two species remain unplaced. A better understanding of evolution and diversity of Copiapoa will allow allocating conservation resources to the most threatened lineages and focusing conservation action on real biodiversity. © 2015 Botanical Society of America.

  4. Biomechanical comparison of four double-row speed-bridging rotator cuff repair techniques with or without medial or lateral row enhancement.

    PubMed

    Pauly, Stephan; Fiebig, David; Kieser, Bettina; Albrecht, Bjoern; Schill, Alexander; Scheibel, Markus

    2011-12-01

    Biomechanical comparison of four different Speed-Bridge configurations with or without medial or lateral row reinforcement. Reinforcement of the knotless Speed-Bridge double-row repair technique with additional medial mattress- or lateral single-stitches was hypothesized to improve biomechanical repair stability at time zero. Controlled laboratory study: In 36 porcine fresh-frozen shoulders, the infraspinatus tendons were dissected and shoulders were randomized to four groups: (1) Speed-Bridge technique with single tendon perforation per anchor (STP); (2) Speed-Bridge technique with double tendon perforation per anchor (DTP); (3) Speed-Bridge technique with medial mattress-stitch reinforcement (MMS); (4) Speed-Bridge technique with lateral single-stitch reinforcement (LSS). All repairs were cyclically loaded from 10-60 N up to 10-200 N (20 N stepwise increase) using a material testing device. Forces at 3 and 5 mm gap formation, mode of failure and maximum load to failure were recorded. The MMS-technique with double tendon perforation showed significantly higher ultimate tensile strength (338.9 ± 90.0 N) than DTP (228.3 ± 99.9 N), LSS (188.9 ± 62.5 N) and STP-technique (122.2 ± 33.8 N). Furthermore, the MMS-technique provided increased maximal force resistance until 3 and 5 mm gap formation (3 mm: 77.8 ± 18.6 N; 5 mm: 113.3 ± 36.1 N) compared with LSS, DTP and STP (P < 0.05 for each 3 and 5 mm gap formation). Failure mode was medial row defect by tendon sawing first, then laterally. No anchor pullout occurred. Double tendon perforation per anchor and additional medial mattress stitches significantly enhance biomechanical construct stability at time zero in this ex vivo model when compared with the all-knotless Speed-Bridge rotator cuff repair.

  5. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results, in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  6. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
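
    A minimal sketch of the log-normal maximum-likelihood fit and the resulting exceedance-rate extrapolation is given below; the eight -Dst values are illustrative placeholders, not the 1957-2012 storm catalogue, and the bootstrap confidence analysis the abstract describes is omitted.

      import numpy as np
      from scipy import stats

      # Log-normal ML fit of storm-time maxima: the MLEs are simply the mean
      # and standard deviation of the log-transformed data. Values below are
      # illustrative placeholders, not the real -Dst catalogue.
      dst_maxima = np.array([589., 472., 429., 383., 374., 372., 362., 352.])
      years_observed = 56.0                  # 1957-2012
      n_per_year = dst_maxima.size / years_observed

      mu = np.mean(np.log(dst_maxima))
      sigma = np.std(np.log(dst_maxima), ddof=0)

      # P(-Dst > 850 nT) under the fitted log-normal, times the event rate,
      # gives the expected number of Carrington-class storms per century:
      p_exceed = stats.lognorm.sf(850.0, s=sigma, scale=np.exp(mu))
      print(p_exceed * n_per_year * 100, 'events per century (illustrative)')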

  7. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  8. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum-likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  9. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  10. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  11. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  12. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  13. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  14. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  15. A 3D approximate maximum likelihood localization solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-09-23

    A robust three-dimensional solver was needed to estimate, accurately and efficiently, the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in sufficient detail to assess the function of dam-passage design alternatives and to support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of the time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
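
    A minimal sketch of the core computation: under Gaussian timing noise, maximum-likelihood localization from time differences of arrival (TDOA) reduces to nonlinear least squares on the arrival-time differences. The hydrophone geometry, sound speed, and source position below are hypothetical, not the deployed array.

      import numpy as np
      from scipy.optimize import least_squares

      C = 1500.0                              # sound speed in water, m/s
      hydrophones = np.array([[0., 0., 0.], [50., 0., 0.],
                              [0., 50., 0.], [0., 0., 30.]])

      def tdoa_residuals(p, tdoas):
          """Residuals of measured TDOAs (relative to hydrophone 0) at p."""
          ranges = np.linalg.norm(hydrophones - p, axis=1)
          predicted = (ranges[1:] - ranges[0]) / C
          return predicted - tdoas

      true_pos = np.array([20., 30., 10.])
      ranges = np.linalg.norm(hydrophones - true_pos, axis=1)
      tdoas = (ranges[1:] - ranges[0]) / C    # noise-free for the demo

      sol = least_squares(tdoa_residuals, x0=np.array([1., 1., 1.]),
                          args=(tdoas,))
      print(sol.x)                            # -> approximately (20, 30, 10)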

  16. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  17. Search for Point Sources of Ultra-High-Energy Cosmic Rays above 4.0 × 1019 eV Using a Maximum Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2005-04-01

    We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.

  18. Death Row Lawyers.

    ERIC Educational Resources Information Center

    Conway, Claire

    1989-01-01

    Discusses the issue of providing adequate legal representation for persons condemned to death. Asks whether a poor defendant charged with a capital crime in the South enjoys fewer rights than one in Los Angeles, and whether this creates a dual system of justice. Suggests that state action for improvement may be slow in coming. (KO)

  19. Adaptive P300 based control system

    PubMed Central

    Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa

    2015-01-01

    An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e., 12 columns and 7 rows). The 9- and 14-flash A & B paradigms present all items of the 12 × 7 matrix three times using either nine or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing interference from items adjacent to targets. 14-flash A also reduced adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that accuracy and bit rate of the adaptive system were higher than the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naïve users. PMID:21474877

  20. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  1. Using the β-binomial distribution to characterize forest health

    Treesearch

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
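
    For readers unfamiliar with the β-binomial likelihood, the following minimal Python sketch fits its two parameters by maximum likelihood from clustered plot counts; the plot data, function names, and log-parametrization are illustrative assumptions, not the Forest Service data or code.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import betaln, gammaln

      # Beta-binomial ML estimation for clustered dichotomous counts:
      # k_i infected out of n_i trees per plot, with
      # P(k | n, a, b) = C(n, k) * B(k + a, n - k + b) / B(a, b).
      def comb_ln(n, k):   # log binomial coefficient (constant in a, b)
          return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

      def neg_loglik(theta, k, n):
          a, b = np.exp(theta)                # keep alpha, beta positive
          return -np.sum(comb_ln(n, k) + betaln(k + a, n - k + b) - betaln(a, b))

      k = np.array([0, 2, 5, 1, 9, 3])        # infected stems per plot (toy)
      n = np.array([10, 12, 10, 8, 15, 10])   # stems examined per plot (toy)

      fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(k, n))
      alpha, beta = np.exp(fit.x)
      print(alpha, beta, alpha / (alpha + beta))  # MLEs and mean infection rate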

  2. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  3. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression has become one of the well-known methods for detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under maximum likelihood, do not seem to be consistently distinguished in the DIF literature. This paper provides a clarifying…
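
    To make the distinction concrete, here is a minimal Python sketch of the likelihood-ratio variant (with simulated placeholder data; statsmodels is assumed for the logistic fits): the compact model regresses item response on ability alone, the augmented model adds group and group-by-ability terms, and twice the log-likelihood difference is referred to a chi-square distribution.

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      # Likelihood-ratio DIF test sketch: compare nested logistic models.
      rng = np.random.default_rng(3)
      n = 500
      ability = rng.normal(size=n)
      group = rng.integers(0, 2, size=n)      # focal vs reference group
      logit = 0.8 * ability + 0.5 * group     # built-in uniform DIF
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X0 = sm.add_constant(np.column_stack([ability]))
      X1 = sm.add_constant(np.column_stack([ability, group, ability * group]))

      m0 = sm.Logit(y, X0).fit(disp=False)    # compact model
      m1 = sm.Logit(y, X1).fit(disp=False)    # augmented model (DIF terms)

      lr = 2 * (m1.llf - m0.llf)              # LR statistic, df = 2
      print(lr, stats.chi2.sf(lr, df=2))      # small p-value flags DIF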

  4. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  5. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  6. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  7. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  8. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.

  9. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  10. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  11. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  12. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  13. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  14. Economic assessment of conventional and conservation tillage practices in different wheat-based cropping systems of Punjab, Pakistan.

    PubMed

    Shahzad, Muhammad; Hussain, Mubshar; Farooq, Muhammad; Farooq, Shahid; Jabran, Khawar; Nawaz, Ahmad

    2017-11-01

    Wheat productivity and profitability are low under conventional tillage systems, as these increase the production cost, soil compaction, and weed infestation. Conservation tillage could be a pragmatic option to sustain wheat productivity and enhance profitability on a long-term basis. This study evaluated the economics of different wheat-based cropping systems, viz. fallow-wheat, rice-wheat, cotton-wheat, mung bean-wheat, and sorghum-wheat, under zero tillage, conventional tillage, deep tillage, bed sowing (60/30 cm beds and four rows), and bed sowing (90/45 cm beds and six rows). Results indicated that bed-sown wheat had the maximum production cost among the tillage systems. Although both bed-sowing treatments incurred the highest production cost, they also generated the highest net benefits and benefit:cost ratio (BCR). The rice-wheat cropping system with bed-sown wheat (90/45 cm beds with six rows) had the highest net income (4129.7 US$ ha-1), BCR (2.87), and marginal rate of return compared with the rest of the cropping systems. In contrast, the fallow-wheat cropping system incurred the lowest input cost but had the lowest economic return. In sum, the rice-wheat cropping system with bed-sown wheat (90/45 cm beds with six rows) was the best option for obtaining higher economic returns. Moreover, double cropping systems within a year are more profitable than sole planting of wheat under all tillage practices.

  15. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
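
    The quantity that PhyML optimizes, the likelihood of an alignment given a tree and a substitution model, can be illustrated in miniature. The sketch below (Python; not PhyML's implementation) scores a hypothetical three-taxon star tree under the Jukes-Cantor (JC69) model and fits its branch lengths with a general-purpose optimizer; the toy alignment and starting values are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import minimize

      def jc69_p(t):
          # Jukes-Cantor transition probabilities after branch length t
          same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
          return same, (1.0 - same) / 3.0

      BASES = "ACGT"

      def loglik(branch_lengths, tip_seqs):
          # Log-likelihood of a 3-taxon star tree: marginalize the internal node
          ll = 0.0
          for column in zip(*tip_seqs):            # alignment columns
              site = 0.0
              for x in BASES:
                  p = 0.25                         # stationary frequency of x
                  for base, t in zip(column, branch_lengths):
                      same, diff = jc69_p(t)
                      p *= same if base == x else diff
                  site += p
              ll += np.log(site)
          return ll

      seqs = ["ACGTACGTAC", "ACGTACGAAC", "ACGAACGTTC"]   # toy alignment
      res = minimize(lambda t: -loglik(t, seqs), x0=[0.1, 0.1, 0.1],
                     bounds=[(1e-6, 5.0)] * 3)
      print("ML branch lengths:", res.x, "max log-likelihood:", -res.fun)

    Real programs avoid the per-site Python loop and apply the same pruning recursion over the internal nodes of arbitrary trees; PhyML's SPR search and aLRT tests sit on top of this basic computation.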

  16. [An evaluation of some of the relationships between thought-action fusion, attributional styles, and depressive and obsessive-compulsive symptoms].

    PubMed

    Piri, Serap; Kabakçi, Elif

    2007-01-01

    Thought-action fusion (TAF) is a cognitive bias presumed to underlie the development of obsessional problems. Two domains of TAF have been identified. The first, TAF-moral, is characterized by the belief that having morally unacceptable thoughts is as bad as actually carrying them out. The second, TAF-likelihood, refers to the belief that certain thoughts cause particular events. The event can be related to one's self (likelihood-self) or to someone else (likelihood-others). The other cognitive variable of the study is attributional style. The theory of attributional styles, in terms of the causes of good and bad events, is taken into account especially in the context of depression and has four dimensions: internality-externality, stability-instability, globality-specificity, and importance-unimportance. The first objective of the present study was to investigate the relationships between TAF, attributional style, and depressive and obsessive-compulsive symptoms. The second objective was to determine the predictors of TAF when the effects of depressive and obsessive-compulsive symptoms are statistically controlled. The sample consisted of 312 students randomly selected from different departments at Hacettepe University. The Thought-Action Fusion Scale (TAFS), Attributional Style Questionnaire (ASQ), Maudsley Obsessive-Compulsive Inventory (MOCI), and Beck Depression Inventory (BDI) were administered to these students. The correlations among all the subtypes of TAF (TAF-moral, likelihood-self, and likelihood-others), the global attributions for bad events, BDI, and MOCI were significant. In addition, the correlation between TAF-moral and the importance of the attribution for bad events was significant. TAF-likelihood-others and TAF-likelihood-self were predicted by global attributions for bad events, and TAF-moral was predicted by the importance of the attributions for bad events. TAF, attributional styles, and depressive and obsessive-compulsive symptoms may be related to each other. The results also suggest a possible effect of other variables not controlled in this study, both on TAF and the dimensions of attributional styles.

  17. [Arthroscopic double-row reconstruction of high-grade subscapularis tendon tears].

    PubMed

    Plachel, F; Pauly, S; Moroder, P; Scheibel, M

    2018-04-01

    Reconstruction of tendon integrity to maintain glenohumeral joint centration and hence to restore shoulder functional range of motion and to reduce pain. Isolated or combined full-thickness subscapularis tendon tears (≥ upper two-thirds of the tendon) without substantial soft tissue degeneration or cranialization of the humeral head. Chronic tears of the subscapularis tendon with higher-grade muscle atrophy, fatty infiltration, and static decentration of the humeral head. After arthroscopic three-sided subscapularis tendon release, two double-loaded suture anchors are placed medially to the humeral footprint. Following suture passage, the suture limbs are tied and secured laterally with up to two knotless anchors, creating a transosseous-equivalent repair. The affected arm is placed in a shoulder brace with 20° of abduction and slight internal rotation for 6 weeks postoperatively. A rehabilitation protocol including progressive physical therapy from a maximum protection phase to a minimum protection phase is required. Overhead activities are permitted after 6 months. While previous studies have demonstrated superior biomechanical properties and clinical results after double-row compared to single-row and transosseous fixation techniques, further mid- to long-term clinical investigations are needed to confirm these findings.

  18. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
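
    As a minimal sketch of the estimation setup (assuming Gaussian measurement noise, so that maximizing the likelihood reduces to least squares, and using a deliberately crude stand-in for the physical-optics forward model), scipy's dual_annealing can play the role of the simulated annealing search over wavefront coefficients:

      import numpy as np
      from scipy.optimize import dual_annealing

      rng = np.random.default_rng(0)

      def irradiance_model(coeffs, grid):
          # Hypothetical stand-in for the physical-optics forward model mapping
          # wavefront coefficients to an irradiance pattern near focus.
          x, y = grid
          phase = coeffs[0] * (x**2 + y**2) + coeffs[1] * x * y + coeffs[2] * x
          return np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * phase))))**2

      grid = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
      true_coeffs = np.array([2.0, -0.7, 1.3])
      data = irradiance_model(true_coeffs, grid) + rng.normal(0, 0.1, (32, 32))

      def neg_log_likelihood(coeffs):
          # Gaussian noise: maximizing likelihood == minimizing squared residuals
          return np.sum((data - irradiance_model(coeffs, grid))**2)

      # A global search copes with the many local extrema of the likelihood surface
      result = dual_annealing(neg_log_likelihood, bounds=[(-5, 5)] * 3,
                              maxiter=100, seed=1)
      print("estimated coefficients:", result.x)

    The search typically recovers the generating coefficients, possibly up to sign ambiguities inherent to intensity-only data.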

  19. Impacts of tree rows on grassland birds and potential nest predators: a removal experiment.

    PubMed

    Ellison, Kevin S; Ribic, Christine A; Sample, David W; Fawcett, Megan J; Dadisman, John D

    2013-01-01

    Globally, grasslands and the wildlife that inhabit them are widely imperiled. Encroachment by shrubs and trees has widely impacted grasslands in the past 150 years. In North America, most grassland birds avoid nesting near woody vegetation. Because woody vegetation fragments grasslands and potential nest predator diversity and abundance is often greater along wooded edge and grassland transitions, we measured the impacts of removing rows of trees and shrubs that intersected grasslands on potential nest predators and the three most abundant grassland bird species (Henslow's sparrow [Ammodramus henslowii], Eastern meadowlark [Sturnella magna], and bobolink [Dolichonyx oryzivorus]) at sites in Wisconsin, U.S.A. We monitored 3 control and 3 treatment sites, for 1 yr prior to and 3 yr after tree row removal at the treatment sites. Grassland bird densities increased (2-4 times for bobolink and Henslow's sparrow) and nesting densities increased (all 3 species) in the removal areas compared to control areas. After removals, Henslow's sparrows nested within ≤50 m of the treatment area, where they did not occur when tree rows were present. Most dramatically, activity by woodland-associated predators nearly ceased (nine-fold decrease for raccoon [Procyon lotor]) at the removals and grassland predators increased (up to 27 times activity for thirteen-lined ground squirrel [Ictidomys tridecemlineatus]). Nest success did not increase, likely reflecting the increase in grassland predators. However, more nests were attempted by all 3 species (175 versus 116) and the number of successful nests for bobolinks and Henslow's sparrows increased. Because of gains in habitat, increased use by birds, greater production of young, and the effective removal of woodland-associated predators, tree row removal, where appropriate based on the predator community, can be a beneficial management action for conserving grassland birds and improving fragmented and degraded grassland ecosystems.

  20. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage (2458 species; J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and species in which the metapleural gland (the ancestrally defining ant character) has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  1. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population, and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism by which locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information, in addition to the data themselves, must be available to analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
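
    A minimal sketch of the weighting idea, assuming the standard single-season occupancy likelihood and hypothetical inverse-selection-probability weights (this is not the authors' code):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit   # inverse logit

      def neg_pseudo_loglik(params, y, K, w):
          # y[i]: number of detections in K visits at site i; w[i]: design weight
          psi, p = expit(params)        # occupancy and detection probabilities
          # P(history) = psi * p^y * (1-p)^(K-y) + (1-psi) * 1{y == 0}
          # (binomial constant omitted; it does not change the maximizer)
          lik = psi * p**y * (1 - p)**(K - y) + (1 - psi) * (y == 0)
          return -np.sum(w * np.log(lik))

      # Toy data: 6 sites, 4 visits; weights = 1 / selection probability (hypothetical)
      y = np.array([0, 2, 0, 1, 3, 0])
      w = np.array([1.0, 1.0, 4.0, 4.0, 4.0, 1.0])

      res = minimize(neg_pseudo_loglik, x0=[0.0, 0.0], args=(y, 4, w),
                     method="Nelder-Mead")
      print("weighted estimates of psi, p:", expit(res.x))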

  2. 77 FR 54813 - Safety Zone; Head of the Cuyahoga, U.S. Rowing Masters Head Race National Championship, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-06

    ... Dragon Boat Festival, Cuyahoga River, Cleveland, OH AGENCY: Coast Guard, DHS. ACTION: Temporary final... Dragon Boat Festival. This safety zone is necessary to protect spectators, participants, and vessels from... conjunction with the HOTC, the Cleveland Dragon Boat Festival will take place just north of the Detroit...

  3. 76 FR 81906 - Advance Notice of Proposed Rulemaking Regarding a Competitive Process for Leasing Public Lands...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... Solar and Wind Energy Development AGENCY: Bureau of Land Management. ACTION: Advance notice of proposed... to establish a competitive process for leasing public lands for solar and wind energy development... process for issuing Right-of-Way (ROW) leases for solar and wind energy development that is based upon the...

  4. 77 FR 12873 - Notice of Segregation of Public Lands in the State of Arizona Associated With the Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... Quartzsite Solar Energy Project, La Paz County, AZ AGENCY: Bureau of Land Management, Interior. ACTION... connection with the BLM's processing of a right-of-way (ROW) application for Quartzsite Solar Energy, LLC's Quartzsite Solar Energy Project (Proposed Project). This segregation covers approximately 2,013.76 acres of...

  5. 75 FR 66653 - Airworthiness Directives; McDonnell Douglas Corporation Model MD-90-30 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... the left and right engine aft mounts with new fasteners, in accordance with the Accomplishment... defects of the upper fasteners of the aft mount support fittings of the left and right engines, and corrective actions if necessary. This new AD requires repetitive replacement of the upper row of fasteners of...

  6. DSN telemetry system performance using a maximum likelihood convolutional decoder

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Kemp, R. P.

    1977-01-01

    Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of the rate 1/3 over that of the rate 1/2. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.
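
    Maximum likelihood decoding of a convolutional code is the Viterbi algorithm: over a binary symmetric channel, the most likely transmitted sequence is the trellis path with minimum Hamming distance to the received bits. A hard-decision sketch for a rate-1/2, constraint-length-7 code follows (the octal generators 171/133 are the ones conventionally quoted for this code; the message and injected error are illustrative):

      # Rate-1/2, constraint-length-7 convolutional code; the generators written
      # 171/133 in octal are the ones conventionally associated with this code.
      G = (0o171, 0o133)
      K = 7
      N_STATES = 1 << (K - 1)

      def encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << (K - 1)) | state          # newest bit enters on the left
              out += [bin(reg & g).count("1") & 1 for g in G]
              state = reg >> 1
          return out

      def viterbi_decode(received):
          # Hard-decision maximum-likelihood decoding: the most likely transmitted
          # sequence is the trellis path of minimum Hamming distance to the input.
          INF = float("inf")
          metric = [0.0] + [INF] * (N_STATES - 1)   # start in the all-zero state
          paths = [[]] + [None] * (N_STATES - 1)
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_m, new_p = [INF] * N_STATES, [None] * N_STATES
              for s in range(N_STATES):
                  if paths[s] is None:
                      continue
                  for b in (0, 1):
                      reg = (b << (K - 1)) | s
                      branch = [bin(reg & g).count("1") & 1 for g in G]
                      cost = metric[s] + sum(x != y for x, y in zip(branch, r))
                      ns = reg >> 1
                      if cost < new_m[ns]:
                          new_m[ns], new_p[ns] = cost, paths[s] + [b]
              metric, paths = new_m, new_p
          best = min(range(N_STATES), key=lambda s: metric[s])
          return paths[best]

      msg = [1, 0, 1, 1, 0, 0, 1] + [0] * (K - 1)   # zero tail flushes the encoder
      coded = encode(msg)
      coded[3] ^= 1                                 # flip one channel bit
      print(viterbi_decode(coded)[:7] == msg[:7])   # True: the error is corrected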

  7. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.
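
    The flavor of the multifrequency estimator can be conveyed with a one-pixel toy (hypothetical conversion factors and noise level; the actual method estimates local plane parameters rather than a single height): under von Mises phase noise, the ML estimate maximizes the summed cosine of the wrapped residuals across frequencies, and several incommensurate frequencies make the maximum unique over a wide range without any phase-gradient unwrapping.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical height-to-phase factors (rad per meter) for three sub-band
      # interferograms; real values follow from baseline, wavelength, and geometry.
      k = np.array([0.8, 1.0, 1.3])
      true_height = 42.0
      phase = np.angle(np.exp(1j * (k * true_height + rng.normal(0, 0.3, 3))))

      # Grid search: ML height under von Mises noise maximizes the summed cosine
      # of the wrapped residuals across the independent frequencies
      heights = np.linspace(0.0, 60.0, 60001)
      score = np.cos(phase[:, None] - np.outer(k, heights)).sum(axis=0)
      print(f"ML height: {heights[np.argmax(score)]:.2f} m (true {true_height} m)")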

  8. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  9. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
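
    The two decision rules can be contrasted on synthetic data (a toy stand-in with known class statistics; the study's sample-versus-vector comparison on real multispectral fields is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic two-class data with unequal covariances, a stand-in for
      # two crop classes in a two-band feature space
      mean = [np.array([0.0, 0.0]), np.array([2.5, 1.0])]
      cov = [np.eye(2), np.array([[2.0, 0.8], [0.8, 0.5]])]
      X = np.vstack([rng.multivariate_normal(mean[c], cov[c], 200) for c in (0, 1)])
      labels = np.repeat([0, 1], 200)

      def min_distance(x):
          # assign to the nearest class mean (Euclidean distance)
          return int(np.argmin([np.linalg.norm(x - m) for m in mean]))

      def max_likelihood(x):
          # assign to the class with the highest Gaussian log-density
          score = [-0.5 * ((x - m) @ np.linalg.inv(S) @ (x - m)
                           + np.log(np.linalg.det(S))) for m, S in zip(mean, cov)]
          return int(np.argmax(score))

      for name, rule in (("minimum distance", min_distance),
                         ("maximum likelihood", max_likelihood)):
          acc = np.mean([rule(x) == c for x, c in zip(X, labels)])
          print(f"{name}: {acc:.3f}")

    On this toy data the Gaussian ML rule usually wins because it models the class covariances; the study's opposite ranking reflects its sample (per-field) classifiers, which this per-vector sketch does not reproduce.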

  10. Maximum likelihood conjoint measurement of lightness and chroma.

    PubMed

    Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna

    2016-03-01

    Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.
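
    A toy version of maximum likelihood conjoint measurement, with simulated observers and hypothetical scale values (decision noise fixed at unit standard deviation, and the first level of each dimension anchored at zero for identifiability):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      L, C = 4, 4                            # lightness and chroma levels

      true_l = 0.5 * np.linspace(0, 1, L)    # hypothetical lightness intrusion
      true_c = np.linspace(0, 3, C)          # hypothetical chroma contribution

      # Simulate "which looks more chromatic?" judgments for random stimulus pairs
      pairs, resp = [], []
      for _ in range(1000):
          (l1, c1), (l2, c2) = rng.integers(0, [L, C], size=(2, 2))
          d = (true_l[l2] + true_c[c2]) - (true_l[l1] + true_c[c1])
          pairs.append((l1, c1, l2, c2))
          resp.append(rng.random() < norm.cdf(d))   # decision noise sd = 1

      def neg_loglik(theta):
          wl = np.concatenate([[0.0], theta[:L - 1]])   # anchor first levels at 0
          wc = np.concatenate([[0.0], theta[L - 1:]])
          ll = 0.0
          for (l1, c1, l2, c2), r in zip(pairs, resp):
              p = norm.cdf((wl[l2] + wc[c2]) - (wl[l1] + wc[c1]))
              p = np.clip(p, 1e-9, 1 - 1e-9)
              ll += np.log(p) if r else np.log(1 - p)
          return -ll

      res = minimize(neg_loglik, np.zeros(L + C - 2), method="BFGS")
      print("lightness effects:", res.x[:L - 1])   # near 0.17, 0.33, 0.50
      print("chroma effects:   ", res.x[L - 1:])   # near 1, 2, 3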

  11. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  12. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
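
    A small simulation conveys the idea, with a hypothetical selection model in which heavily contaminated lots are oversampled; each sampled unit is weighted by the inverse of its selection probability:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import lognorm

      rng = np.random.default_rng(5)

      # Hypothetical population and selection model: heavily contaminated lots
      # are more likely to be sampled, so the sample over-represents them
      x_pop = rng.lognormal(mean=1.0, sigma=0.8, size=50000)
      p_select = np.clip(0.001 * x_pop, 0.0, 1.0)
      chosen = rng.random(50000) < p_select
      x, w = x_pop[chosen], 1.0 / p_select[chosen]   # inverse-probability weights

      def neg_loglik(theta, weights):
          mu, log_sigma = theta
          ll = lognorm.logpdf(x, s=np.exp(log_sigma), scale=np.exp(mu))
          return -np.sum(weights * ll)

      naive = minimize(neg_loglik, [0.0, 0.0], args=(np.ones_like(x),))
      weighted = minimize(neg_loglik, [0.0, 0.0], args=(w,))
      print("naive mu, sigma:   ", naive.x[0], np.exp(naive.x[1]))      # biased upward
      print("weighted mu, sigma:", weighted.x[0], np.exp(weighted.x[1]))  # near 1.0, 0.8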

  13. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets ≥4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak

  14. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  15. Determining crop residue type and class using satellite acquired data. M.S. Thesis Progress Report, Jun. 1990

    NASA Technical Reports Server (NTRS)

    Zhuang, Xin

    1990-01-01

    LANDSAT Thematic Mapper (TM) data for March 23, 1987, with accompanying ground truth data for the study area in Miami County, IN, were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. A geographic information system (GIS) layer of land ownership was added to each original image as an eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural network classifiers were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without need of any transformation. The neural network classifier was able to improve the classification by incorporating the GIS layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to consider this eighth band, and thus its results could not be improved in this way.

  16. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  17. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
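
    The ML-versus-maximum-entropy comparison can be reproduced in miniature by exhaustive enumeration instead of an annealer (the toy source word, coupling, noise level, and temperature below are all assumptions):

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(6)
      n = 8
      truth = np.array([1, 1, 1, 1, -1, -1, -1, -1])   # a "smooth" source word
      J, T = 0.9, 1.0                                  # coupling and temperature
      states = np.array(list(product([-1, 1], repeat=n)))
      smooth = np.sum(states[:, :-1] * states[:, 1:], axis=1)

      trials, err_ml, err_me = 300, 0, 0
      for _ in range(trials):
          h = truth + rng.normal(0, 1.2, n)            # noisy channel output
          E = -J * smooth - states @ h                 # Ising energy, all 2^n states
          err_ml += np.sum(states[np.argmin(E)] != truth)   # ground state = ML word
          p = np.exp(-(E - E.min()) / T)
          p /= p.sum()                                 # Boltzmann weights
          err_me += np.sum(np.sign(p @ states) != truth)    # bitwise thermal vote
      print("BER, maximum likelihood:", err_ml / (trials * n))
      print("BER, maximum entropy:   ", err_me / (trials * n))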

  18. The Diagnostic Accuracy of Special Tests for Rotator Cuff Tear: The ROW Cohort Study

    PubMed Central

    Jain, Nitin B.; Luz, Jennifer; Higgins, Laurence D.; Dong, Yan; Warner, Jon J.P.; Matzkin, Elizabeth; Katz, Jeffrey N.

    2016-01-01

    Objective The aim was to assess diagnostic accuracy of 15 shoulder special tests for rotator cuff tears. Design From 02/2011 to 12/2012, 208 participants with shoulder pain were recruited in a cohort study. Results Among tests for supraspinatus tears, Jobe’s test had a sensitivity of 88% (95% CI=80% to 96%), specificity of 62% (95% CI=53% to 71%), and likelihood ratio of 2.30 (95% CI=1.79 to 2.95). The full can test had a sensitivity of 70% (95% CI=59% to 82%) and a specificity of 81% (95% CI=74% to 88%). Among tests for infraspinatus tears, external rotation lag signs at 0° had a specificity of 98% (95% CI=96% to 100%) and a likelihood ratio of 6.06 (95% CI=1.30 to 28.33), and the Hornblower’s sign had a specificity of 96% (95% CI=93% to 100%) and likelihood ratio of 4.81 (95% CI=1.60 to 14.49). Conclusions Jobe’s test and full can test had high sensitivity and specificity for supraspinatus tears and Hornblower’s sign performed well for infraspinatus tears. In general, special tests described for subscapularis tears have high specificity but low sensitivity. These data can be used in clinical practice to diagnose rotator cuff tears and may reduce the reliance on expensive imaging. PMID:27386812
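
    The reported likelihood ratios follow directly from sensitivity and specificity; for example (small rounding differences against the published 2.30 are expected):

      def likelihood_ratios(sens, spec):
          # Positive and negative likelihood ratios from sensitivity and specificity
          return sens / (1 - spec), (1 - sens) / spec

      # Jobe's test figures from the abstract: sensitivity 88%, specificity 62%
      lr_pos, lr_neg = likelihood_ratios(0.88, 0.62)
      print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # LR+ about 2.3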

  19. Association weight matrix for the genetic dissection of puberty in beef cattle.

    PubMed

    Fortes, Marina R S; Reverter, Antonio; Zhang, Yuandan; Collis, Eliza; Nagaraj, Shivashankar H; Jonsson, Nick N; Prayaga, Kishore C; Barris, Wes; Hawken, Rachel J

    2010-08-03

    We describe a systems biology approach for the genetic dissection of complex traits based on applying gene network theory to the results from genome-wide associations. The associations of single-nucleotide polymorphisms (SNP) that were individually associated with a primary phenotype of interest, age at puberty in our study, were explored across 22 related traits. Genomic regions were surveyed for genes harboring the selected SNP. As a result, an association weight matrix (AWM) was constructed with as many rows as genes and as many columns as traits. Each {i, j} cell value in the AWM corresponds to the z-score normalized additive effect of the ith gene (via its neighboring SNP) on the jth trait. Columnwise, the AWM recovered the genetic correlations estimated via pedigree-based restricted maximum-likelihood methods. Rowwise, a combination of hierarchical clustering, gene network, and pathway analyses identified genetic drivers that would have been missed by standard genome-wide association studies. Finally, the promoter regions of the AWM-predicted targets of three key transcription factors (TFs), estrogen-related receptor gamma (ESRRG), Pal3 motif, bound by a PPAR-gamma homodimer, IR3 sites (PPARG), and Prophet of Pit 1, PROP paired-like homeobox 1 (PROP1), were surveyed to identify binding sites corresponding to those TFs. Applied to our case, the AWM results recapitulate the known biology of puberty, captured experimentally validated binding sites, and identified candidate genes and gene-gene interactions for further investigation.

  20. Association weight matrix for the genetic dissection of puberty in beef cattle

    PubMed Central

    Fortes, Marina R. S.; Reverter, Antonio; Zhang, Yuandan; Collis, Eliza; Nagaraj, Shivashankar H.; Jonsson, Nick N.; Prayaga, Kishore C.; Barris, Wes; Hawken, Rachel J.

    2010-01-01

    We describe a systems biology approach for the genetic dissection of complex traits based on applying gene network theory to the results from genome-wide associations. The associations of single-nucleotide polymorphisms (SNP) that were individually associated with a primary phenotype of interest, age at puberty in our study, were explored across 22 related traits. Genomic regions were surveyed for genes harboring the selected SNP. As a result, an association weight matrix (AWM) was constructed with as many rows as genes and as many columns as traits. Each {i, j} cell value in the AWM corresponds to the z-score normalized additive effect of the ith gene (via its neighboring SNP) on the jth trait. Columnwise, the AWM recovered the genetic correlations estimated via pedigree-based restricted maximum-likelihood methods. Rowwise, a combination of hierarchical clustering, gene network, and pathway analyses identified genetic drivers that would have been missed by standard genome-wide association studies. Finally, the promoter regions of the AWM-predicted targets of three key transcription factors (TFs), estrogen-related receptor γ (ESRRG), Pal3 motif, bound by a PPAR-γ homodimer, IR3 sites (PPARG), and Prophet of Pit 1, PROP paired-like homeobox 1 (PROP1), were surveyed to identify binding sites corresponding to those TFs. Applied to our case, the AWM results recapitulate the known biology of puberty, captured experimentally validated binding sites, and identified candidate genes and gene–gene interactions for further investigation. PMID:20643938
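
    Structurally, the AWM is just a genes-by-traits matrix of column-standardized effects; a sketch with random placeholder numbers (no biological content) shows the two directions of analysis:

      import numpy as np

      rng = np.random.default_rng(7)
      n_genes, n_traits = 500, 22

      # Hypothetical per-gene additive effects (via each gene's nearest SNP) on
      # every trait; a real AWM would be filled from GWAS output
      effects = rng.normal(size=(n_genes, n_traits))

      # z-score normalize each column so effects are comparable across traits
      awm = (effects - effects.mean(axis=0)) / effects.std(axis=0)

      # Column-wise: correlations between trait columns play the role of genetic
      # correlations; row-wise: gene-gene similarity feeds the network analyses
      trait_corr = np.corrcoef(awm, rowvar=False)   # 22 x 22
      gene_corr = np.corrcoef(awm[:5])              # first 5 genes, for display
      print(trait_corr.shape, gene_corr.shape)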

  1. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265

  2. Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...

  3. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
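
    The resulting bound is easy to compute: replacing the observed class counts by their expectations under the estimated recombination fraction gives an expected lod contribution per individual, and the linkage threshold divided by that contribution is the minimum sample size. The sketch below assumes the susceptible (recessive) F2 subpopulation with a codominant marker in coupling phase; other schemes in the paper change only the class frequencies.

      import numpy as np

      def min_sample_size(theta_hat, lod_threshold=3.0):
          # Marker-class frequencies among susceptible (rr) F2 individuals,
          # coupling phase, codominant marker: classes mm, Mm, MM
          t = theta_hat
          f_hat = np.array([(1 - t)**2, 2 * t * (1 - t), t**2])
          f_null = np.array([0.25, 0.50, 0.25])      # same classes at theta = 0.5
          per_individual = np.sum(f_hat * np.log10(f_hat / f_null))
          return lod_threshold / per_individual      # n with expected lod >= threshold

      for theta in (0.05, 0.10, 0.20, 0.30):
          print(f"theta = {theta:.2f}: n >= {min_sample_size(theta):.1f}")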

  4. Application of the two-stage clonal expansion model in characterizing the joint effect of exposure to two carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zielinski, J.M.; Krewski, D.

    1992-12-31

    In this paper, we describe application of the two-stage clonal expansion model to characterize the joint effect of exposure to two carcinogens. This biologically based model of carcinogenesis provides a useful framework for the quantitative description of carcinogenic risks and for defining agents that act as initiators, promoters, and completers. Depending on the mechanism of action, the agent-specific relative risk following exposure to two carcinogens can be additive, multiplicative, or supramultiplicative, with supra-additive relative risk indicating a synergistic effect between the two agents. Maximum-likelihood methods for fitting the two-stage clonal expansion model with intermittent exposure to two carcinogens are described and illustrated, using data on lung-cancer mortality among Colorado uranium miners exposed to both radon and tobacco smoke.

  5. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the parameters of the Weibull distribution; (2) Data for contour plots of relative likelihood for the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
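
    A minimal sketch of the core calculation (item 1 with type-I censoring), in Python rather than the program's Fortran: suspended tests contribute the survival function to the likelihood, while failures contribute the density. The simulated lives and censoring time are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)

      # Simulated fatigue lives (two-parameter Weibull) with type-I censoring:
      # tests suspended at a fixed time t_censor before failure
      shape_true, scale_true, t_censor = 2.0, 100.0, 120.0
      life = scale_true * rng.weibull(shape_true, 30)
      t = np.minimum(life, t_censor)
      failed = life <= t_censor                  # False means "suspended"

      def neg_loglik(theta):
          k, lam = np.exp(theta)                 # log-parameterization keeps both > 0
          log_pdf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam)**k
          log_surv = -(t / lam)**k               # suspensions: survival probability
          return -np.sum(np.where(failed, log_pdf, log_surv))

      res = minimize(neg_loglik, x0=[0.0, 4.0])
      print("ML shape and scale:", np.exp(res.x))   # near 2.0 and 100.0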

  6. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
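
    A minimal sketch of this likelihood (hypothetical peak list, bump width, harmonic count, and a deliberately naive two-note grid search; real systems search far more cleverly):

      import numpy as np

      # Detected peak frequencies (Hz) from a toy two-note "chord"
      peaks = np.array([220.5, 440.8, 659.9, 881.3, 330.1, 660.5, 990.2])
      GRID = np.linspace(20.0, 2000.0, 2000)
      DF = GRID[1] - GRID[0]

      def intensity(f, f0s, n_harm=4, sigma=2.0, rate=1.0):
          # Poisson intensity: Gaussian bumps at the harmonics of each note;
          # superposed notes simply add their intensities
          f = np.atleast_1d(f).astype(float)
          lam = np.full(f.shape, 1e-6)          # small clutter floor
          for f0 in f0s:
              harm = f0 * np.arange(1, n_harm + 1)
              lam += rate * np.exp(-0.5 * ((f[:, None] - harm) / sigma)**2).sum(axis=1)
          return lam

      def loglik(f0s):
          # Poisson point process: sum of log-intensity at the observed peaks
          # minus the integral of the intensity over the frequency axis
          return np.sum(np.log(intensity(peaks, f0s))) - np.sum(intensity(GRID, f0s)) * DF

      cands = np.arange(100.0, 500.0, 5.0)
      best = max(((a, b) for a in cands for b in cands if a < b), key=loglik)
      print("estimated fundamentals:", best)    # close to (220, 330)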

  7. Captures of Boll Weevils (Coleoptera: Curculionidae) in Relation to Trap Distance From Cotton Fields.

    PubMed

    Spurgeon, Dale W

    2016-12-01

    The boll weevil (Anthonomus grandis grandis Boheman) has been eradicated from much of the United States, but remains an important pest of cotton (Gossypium spp.) in other parts of the Americas. Where the weevil occurs, the pheromone trap is a key tool for population monitoring or detection. Traditional monitoring programs have placed traps in or near the outermost cotton rows where damage by farm equipment can cause loss of trapping data. Recently, some programs have adopted a trap placement adjacent to but outside monitored fields. The effects of these changes have not been previously reported. Captures of early-season boll weevils by traps near (≤1 m) or far (7-10 m) from the outermost cotton row were evaluated. In 2005, during renewed efforts to eradicate the boll weevil from the Lower Rio Grande Valley of Texas, far traps consistently captured more weevils than traps near cotton. Traps at both placements indicated similar patterns of early-season weevil captures, which were consistent with those previously reported. In 2006, no distinction between trap placements was detected. Early-season patterns of captures in 2006 were again similar for both trap placements, but captures were much lower and less regular compared with those observed in 2005. These results suggest magnitude and likelihood of weevil capture in traps placed away from cotton are at least as high as for traps adjacent to cotton. Therefore, relocation of traps away from the outer rows of cotton should not negatively impact ability to monitor or detect the boll weevil. Published by Oxford University Press on behalf of Entomological Society of America 2016. This work is written by a US Government employee and is in the public domain in the US.

  8. The relationship of thought-action fusion to pathological worry and generalized anxiety disorder.

    PubMed

    Hazlett-Stevens, Holly; Zucker, Bonnie G; Craske, Michelle G

    2002-10-01

    Meta-cognitive beliefs associated with pathological worry and generalized anxiety disorder (GAD) may encompass the likelihood subtype of thought-action fusion (TAF), the belief that one's thoughts can influence outside events. In the current study of 494 undergraduate college students, positive correlations between scores on the Penn State Worry Questionnaire (PSWQ) and the two Likelihood subscales of the TAF Scale were found, and participants endorsing at least some DSM-IV diagnostic criteria for GAD scored significantly higher on both TAF-Likelihood subscales than participants reporting no GAD symptoms. However, these TAF scales did not predict GAD diagnostic status with PSWQ included as a predictor. In contrast to previous research, the TAF-Moral scale did not correlate with worry. Relationships between TAF, pathological worry, and meta-cognition are discussed in relation to GAD.

  9. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  10. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum- likelihood method gets entangled in

  11. 76 FR 66079 - Notice of Intent To Prepare a Supplemental Draft Environmental Impact Statement for the K Road...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    .../barstow/K Road_ Calico_Solar.html. E-mail: CalicoPV[email protected] . Fax: (760) 252-6098. Mail: Bureau of... Calico Solar Project, San Bernardino County, CA AGENCY: Bureau of Land Management, Interior. ACTION... Statement (EIS) for an amendment to the right-of- way (ROW) grant for the K Road Calico Solar Project...

  12. Detection And Classification Of Web Robots With Honeypots

    DTIC Science & Technology

    2016-03-01

    CLASSIFICATION OF WEB ROBOTS WITH HONEYPOTS by Sean F. McKenna March 2016 Thesis Advisor: Neil Rowe Second Reader: Justin P. Rohrer THIS...Master’s thesis 4. TITLE AND SUBTITLE DETECTION AND CLASSIFICATION OF WEB ROBOTS WITH HONEYPOTS 5. FUNDING NUMBERS 6. AUTHOR(S) Sean F. McKenna 7...DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) Web robots are automated programs that systematically browse the Web , collecting information. Although

  13. Advanced Antennas Enabled by Electromagnetic Metamaterials

    DTIC Science & Technology

    2014-12-01

    radiation patterns of a conical horn antenna and three soft horns with various homogeneous metasurface liners. The maximum cross-polarization level was...inhomogeneous metasurface liners covering both the flared horn section and the straight waveguide section. The metahorn is fed by a circular waveguide...with a diameter of 20 mm. (b) The sizes of the metallic patches at each row of the metasurface in the flared horn section. Both the length and width

  14. IMPROVED TYPE OF FUEL ELEMENT

    DOEpatents

    Monson, H.O.

    1961-01-24

    A radiator-type fuel block assembly is described. It has a hexagonal body of neutron fissionable material having a plurality of longitudinal equally spaced coolant channels therein, aligned in rows parallel to each face of the hexagonal body. Each of these coolant channels is hexagonally shaped with the corners rounded and enlarged, and the assembly has a maximum-temperature isothermal line around each channel which is approximately straight and equidistant between adjacent channels.

  15. Two new endemic species of Ameiva (Squamata: Teiidae) from the dry forest of northwestern Peru and additional information on Ameiva concolor Ruthven, 1924.

    PubMed

    Koch, Claudia; Venegas, Pablo J; Rödder, Dennis; Flecks, Morris; Böhme, Wolfgang

    2013-12-04

    We describe two new species of Ameiva Meyer, 1795 from the dry forest of the Northern Peruvian Andes. The new species Ameiva nodam sp. nov. and Ameiva aggerecusans sp. nov. share a divided frontal plate and are differentiated from each other and from their congeners based on genetic (12S and 16S rRNA genes) and morphological characteristics. A. nodam sp. nov. has dilated postbrachials, a maximum known snout-vent length of 101 mm, 10 longitudinal rows of ventral plates, 86-113 midbody granules, 25-35 lamellae under the fourth toe, and a color pattern with 5 longitudinal yellow stripes on the dorsum. Ameiva aggerecusans sp. nov. has postbrachials that are not or only slightly dilated, a maximum known snout-vent length of 99.3 mm, 10-12 longitudinal rows of ventral plates, 73-92 midbody granules, 31-39 lamellae under the fourth toe, and its females and juveniles normally exhibit a cream-colored vertebral stripe on a dark dorsal ground color. We provide information on the intraspecific variation and distribution of A. concolor. Furthermore, we provide information on the environmental niches of the taxa and test for niche conservatism.

  16. Comparison of isometric exercises for activating latissimus dorsi against the upper body weight.

    PubMed

    Park, Se-yeon; Yoo, Won-gyu; An, Duk-hyun; Oh, Jae-seop; Lee, Jung-hoon; Choi, Bo-ram

    2015-02-01

    Because there is little agreement as to which exercise is the most effective for activating the latissimus dorsi, and its intramuscular components are rarely compared, we investigated the intramuscular components of the latissimus dorsi during both trunk and shoulder exercises. Sixteen male subjects performed four isometric exercises: inverted row, body lifting, trunk extension, and trunk lateral bending. Surface electromyography (sEMG) was used to collect data from the medial and lateral components of the latissimus dorsi, lower trapezius, and the erector spinae at the 12th thoracic level during the isometric exercises. Two-way repeated analysis of variance with two within-subject factors (muscles and exercise conditions) was used to determine the significance of differences between the muscles and differences between exercise variations. The inverted row showed the highest values for the medial latissimus dorsi, which were significantly higher than those of the body lifting or trunk extension exercises. For the lateral latissimus dorsi, lateral bending showed significantly higher muscle activity than the inverted row or trunk extension. During body lifting, the % maximum voluntary isometric contraction (MVIC) of the erector spinae showed the lowest value, significantly lower than those of the other isometric exercises. The inverted row exercise was effective for activating the medial latissimus dorsi versus the shoulder depression and trunk extension exercises. The lateral bending and body lifting exercises were favorable for activating the lateral component of the latissimus dorsi. Evaluating trunk lateral bending is essential for examining the function of the latissimus dorsi. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Polarization signatures for abandoned agricultural fields in the Manix Basin area of the Mojave Desert

    NASA Technical Reports Server (NTRS)

    Ray, Terrill W.; Farr, Tom G.; Vanzyl, Jakob J.

    1991-01-01

    Polarimetric signatures from abandoned circular alfalfa fields in the Manix Basin area of the Mojave Desert show systematic changes with length of abandonment. The obliteration of circular planting rows by surface processes could account for the disappearance of bright 'spokes', which seem to be reflection patterns from remnants of the planting rows, with increasing length of abandonment. An observed shift in the location of the maximum L-band copolarization return away from VV, as well as an increase in surface roughness, both occurring with increasing age of abandonment, seems to be attributable to the formation of wind ripples on the relatively vegetationless fields. A Late Pleistocene/Holocene sand bar deposit, which can be identified in the radar images, is probably responsible for the failure of three fields to match the age sequence patterns in roughness and peak shift.

  18. Multi-Detector Row Computed Tomography Findings of Pelvic Congestion Syndrome Caused by Dilated Ovarian Veins

    PubMed Central

    Eren, Suat

    2010-01-01

    Objective: To evaluate the efficacy of multi-detector row CT (MDCT) on pelvic congestion syndrome (PCS), which is often overlooked or poorly visualized with routine imaging examination. Materials and Methods: We evaluated the MDCT features of 40 patients with PCS (mean age, 45 years; range, 29–60 years) using axial, coronal, sagittal, 3D volume-rendered, and maximum intensity projection (MIP) images. Results: MDCT revealed pelvic varices and ovarian vein dilatations in all patients. Bilateral ovarian vein dilatation was present in 25 patients, and 15 patients had unilateral dilatation. While 12 cases of secondary pelvic varices occurred simultaneously with a retroaortic left renal vein, 10 cases were due solely to a mass obstruction or stenosis of venous structures. Conclusion: MDCT is an effective tool in the evaluation of PCS, and it has more advantages than other imaging modalities. PMID:25610142

  19. Life analysis of multiroller planetary traction drive

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Rohn, D. A.; Loewenthal, S. H.

    1981-01-01

    A contact fatigue life analysis was performed for a constant ratio, Nasvytis Multiroller Traction Drive. The analysis was based on the Lundberg-Palmgren method for rolling element bearing life prediction. Life adjustment factors for materials, processing, lubrication and traction were included. The 14.7 to 1 ratio drive consisted of a single stage planetary configuration with two rows of stepped planet rollers of five rollers per row, having a roller cluster diameter of approximately 0.21 m, a width of 0.06 m and a weight of 9 kg. Drive system 10 percent life ranged from 18,800 hours at 16.6 kW (22.2 hp) and 25,000 rpm sun roller speed, to 305 hours at maximum operating conditions of 149 kW (200 hp) and 75,000 rpm sun roller speed. The effect of roller diameter and roller center location on life were determined. It was found that an optimum life geometry exists.

  20. An indecent proposal: the dual functions of indirect speech.

    PubMed

    Chakroff, Aleksandr; Thomas, Kyle A; Haque, Omar S; Young, Liane

    2015-01-01

    People often use indirect speech, for example, when trying to bribe a police officer by asking whether there might be "a way to take care of things without all the paperwork." Recent game theoretic accounts suggest that a speaker uses indirect speech to reduce public accountability for socially risky behaviors. The present studies examine a secondary function of indirect speech use: increasing the perceived moral permissibility of an action. Participants report that indirect speech is associated with reduced accountability for unethical behavior, as well as increased moral permissibility and increased likelihood of unethical behavior. Importantly, moral permissibility was a stronger mediator of the effect of indirect speech on likelihood of action, for judgments of one's own versus others' unethical action. In sum, the motorist who bribes the police officer with winks and nudges may not only avoid public punishment but also maintain the sense that his actions are morally permissible. Copyright © 2014 Cognitive Science Society, Inc.

  1. Interference between Redroot Pigweed (Amaranthus retroflexus L.) and Cotton (Gossypium hirsutum L.): Growth Analysis.

    PubMed

    Ma, Xiaoyan; Wu, Hanwen; Jiang, Weili; Ma, Yajie; Ma, Yan

    2015-01-01

    Redroot pigweed is an injurious agricultural weed worldwide. Understanding its interference impact in crop fields will provide useful information for weed control programs. The effects of redroot pigweed on cotton at densities of 0, 0.125, 0.25, 0.5, 1, 2, 4, and 8 plants m(-1) of row were evaluated in field experiments conducted in 2013 and 2014 at the Institute of Cotton Research, CAAS, in China. Redroot pigweed remained taller and thicker than cotton and heavily shaded cotton throughout the growing season. Both cotton height and stem diameter were reduced with increasing redroot pigweed density. Moreover, interference from redroot pigweed delayed cotton maturity, especially at densities of 1 to 8 weed plants m(-1) of row, and reduced cotton boll weight and seed numbers per boll. The relationship between redroot pigweed density and seed cotton yield was described by a hyperbolic decay regression model, which estimated that a density of 0.20-0.33 weed plants m(-1) of row would result in a 50% seed cotton yield loss from the maximum yield. Redroot pigweed seed production per plant or per square meter followed a logarithmic response. At a density of 1 plant m(-1) of cotton row, redroot pigweed produced about 626,000 seeds m(-2). Intraspecific competition resulted in density-dependent effects on weed biomass per plant, which ranged from 430 to 2,250 g dry weight by harvest. Redroot pigweed biomass ha(-1) tended to increase with increasing weed density, as indicated by a logarithmic response. Fiber quality was not significantly influenced by weed density when analyzed over the two years; however, fiber length uniformity and micronaire were adversely affected at a density of 1 weed plant m(-1) of row in 2014. The adverse impact of redroot pigweed on cotton growth and development identified in this study indicates the need for effective redroot pigweed management.

  2. Interference between Redroot Pigweed (Amaranthus retroflexus L.) and Cotton (Gossypium hirsutum L.): Growth Analysis

    PubMed Central

    Ma, Xiaoyan; Wu, Hanwen; Jiang, Weili; Ma, Yajie; Ma, Yan

    2015-01-01

    Redroot pigweed is an injurious agricultural weed worldwide. Understanding its interference impact in crop fields will provide useful information for weed control programs. The effects of redroot pigweed on cotton at densities of 0, 0.125, 0.25, 0.5, 1, 2, 4, and 8 plants m-1 of row were evaluated in field experiments conducted in 2013 and 2014 at the Institute of Cotton Research, CAAS, in China. Redroot pigweed remained taller and thicker than cotton and heavily shaded cotton throughout the growing season. Both cotton height and stem diameter were reduced with increasing redroot pigweed density. Moreover, interference from redroot pigweed delayed cotton maturity, especially at densities of 1 to 8 weed plants m-1 of row, and reduced cotton boll weight and seed numbers per boll. The relationship between redroot pigweed density and seed cotton yield was described by a hyperbolic decay regression model, which estimated that a density of 0.20–0.33 weed plants m-1 of row would result in a 50% seed cotton yield loss from the maximum yield. Redroot pigweed seed production per plant or per square meter followed a logarithmic response. At a density of 1 plant m-1 of cotton row, redroot pigweed produced about 626,000 seeds m-2. Intraspecific competition resulted in density-dependent effects on weed biomass per plant, which ranged from 430 to 2,250 g dry weight by harvest. Redroot pigweed biomass ha-1 tended to increase with increasing weed density, as indicated by a logarithmic response. Fiber quality was not significantly influenced by weed density when analyzed over the two years; however, fiber length uniformity and micronaire were adversely affected at a density of 1 weed plant m-1 of row in 2014. The adverse impact of redroot pigweed on cotton growth and development identified in this study indicates the need for effective redroot pigweed management. PMID:26057386

  3. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
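
    The output-error formulation used for this kind of parameter extraction reduces, under Gaussian measurement noise, to minimizing a concentrated negative log-likelihood of the fit residuals. The sketch below illustrates the idea on a hypothetical one-degree-of-freedom roll model with synthetic data; it is not the estimator or aircraft model of the study.

      import numpy as np
      from scipy.optimize import minimize

      dt, n = 0.02, 500
      t = np.arange(n) * dt
      da = np.where((t > 1.0) & (t < 2.0), 0.05, 0.0)    # hypothetical aileron pulse

      def simulate(Lp, Lda):
          # One-degree-of-freedom roll model p_dot = Lp*p + Lda*da, Euler-integrated.
          p = np.zeros(n)
          for k in range(n - 1):
              p[k + 1] = p[k] + dt * (Lp * p[k] + Lda * da[k])
          return p

      rng = np.random.default_rng(1)
      measured = simulate(-2.0, 8.0) + rng.normal(0.0, 0.01, n)  # synthetic "flight" data

      def neg_log_likelihood(theta):
          # Gaussian measurement noise with its variance concentrated out.
          residual = measured - simulate(*theta)
          return 0.5 * n * np.log(np.mean(residual ** 2))

      fit = minimize(neg_log_likelihood, x0=[-1.0, 5.0], method="Nelder-Mead")
      print(fit.x)    # recovers roughly (-2.0, 8.0)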

  4. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even with considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a way of lessening the total computation time required without greatly degrading the quality of the estimates.

  5. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square errors of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
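
    A minimal sketch of the first estimator discussed, with the kernel scaling factor chosen automatically by maximizing the leave-one-out log-likelihood (one common automatic rule; the paper's own algorithm may differ):

      import numpy as np

      def kde(x_eval, sample, h):
          # Gaussian kernel density estimate at the points x_eval.
          u = (x_eval[:, None] - sample[None, :]) / h
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

      def loo_log_likelihood(sample, h):
          # Leave-one-out log-likelihood of the sample under bandwidth h.
          n = len(sample)
          u = (sample[:, None] - sample[None, :]) / h
          k = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
          np.fill_diagonal(k, 0.0)            # leave each point out of its own fit
          return np.log(np.maximum(k.sum(axis=1) / (n - 1), 1e-300)).sum()

      rng = np.random.default_rng(0)
      sample = rng.normal(size=200)
      hs = np.linspace(0.05, 1.0, 40)
      best_h = hs[np.argmax([loo_log_likelihood(sample, h) for h in hs])]
      print(best_h)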

  6. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  7. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
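
    For reference, the baseline EM iteration that DQAEM modifies looks like the following for a two-component 1-D Gaussian mixture; the quantum annealing extension itself is not reproduced here, and the sensitivity to initialization is exactly what this plain version exhibits:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

      w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      for _ in range(200):
          # E-step: responsibilities under the current parameters.
          pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
          r = w * pdf
          r /= r.sum(axis=1, keepdims=True)
          # M-step: weighted maximum likelihood updates.
          nk = r.sum(axis=0)
          w = nk / len(x)
          mu = (r * x[:, None]).sum(axis=0) / nk
          var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

      print(w, mu, var)    # roughly (0.3, 0.7), (-2, 3), (1, 1)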

  8. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

    A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.

  9. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and therefore applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  10. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
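
    The idea can be illustrated in a few lines: exact soft-decision ML decoding is an exhaustive correlation search over the codebook, while the approximation restricts attention to candidates built from the least reliable received symbols (a Chase-style rule; the specific candidate-selection scheme in the report may differ). The small (7,4) Hamming code below stands in for the codes discussed.

      import numpy as np
      from itertools import product

      G = np.array([[1,0,0,0,1,1,0],
                    [0,1,0,0,0,1,1],
                    [0,0,1,0,1,1,1],
                    [0,0,0,1,1,0,1]])                  # (7,4) Hamming generator
      codebook = np.array([np.dot(m, G) % 2 for m in product((0, 1), repeat=4)])
      signs = 1 - 2 * codebook                          # BPSK: bit 0 -> +1, bit 1 -> -1

      def ml_decode(r):
          # Exact soft-decision ML: exhaustive correlation maximization.
          return codebook[np.argmax(signs @ r)]

      def chase_decode(r, t=3):
          # Approximate ML: hard-decide, then retry with the t least reliable
          # positions flipped, projecting each candidate onto the nearest codeword.
          hard = (r < 0).astype(int)
          weak = np.argsort(np.abs(r))[:t]
          best, best_corr = None, -np.inf
          for flips in product((0, 1), repeat=t):
              cand = hard.copy()
              cand[weak] ^= np.array(flips)
              cw = codebook[np.argmin((codebook != cand).sum(axis=1))]
              corr = (1 - 2 * cw) @ r
              if corr > best_corr:
                  best, best_corr = cw, corr
          return best

      r = np.array([0.9, -1.1, 0.2, 1.0, -0.1, 0.8, -0.9])   # noisy received vector
      print(ml_decode(r), chase_decode(r))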

  11. The amplitude and spectral index of the large angular scale anisotropy in the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Ganga, Ken; Page, Lyman; Cheng, Edward; Meyer, Stephan

    1994-01-01

    In many cosmological models, the large angular scale anisotropy in the cosmic microwave background is parameterized by a spectral index, n, and a quadrupolar amplitude, Q. For a Harrison-Peebles-Zel'dovich spectrum, n = 1. Using data from the Far Infrared Survey (FIRS) and a new statistical measure, a contour plot of the likelihood for cosmological models with -1 < n < 3 and 0 ≤ Q ≤ 50 μK is obtained. Depending upon the details of the analysis, the maximum likelihood occurs at n between 0.8 and 1.4 and Q between 18 and 21 μK. Regardless of Q, the likelihood is always less than half its maximum for n < -0.4 and for n > 2.2, as it is for Q < 8 μK and Q > 44 μK.

  12. Rate of false conviction of criminal defendants who are sentenced to death

    PubMed Central

    Gross, Samuel R.; O’Brien, Barbara; Hu, Chen; Kennedy, Edward H.

    2014-01-01

    The rate of erroneous conviction of innocent criminal defendants is often described as not merely unknown but unknowable. There is no systematic method to determine the accuracy of a criminal conviction; if there were, these errors would not occur in the first place. As a result, very few false convictions are ever discovered, and those that are discovered are not representative of the group as a whole. In the United States, however, a high proportion of false convictions that do come to light and produce exonerations are concentrated among the tiny minority of cases in which defendants are sentenced to death. This makes it possible to use data on death row exonerations to estimate the overall rate of false conviction among death sentences. The high rate of exoneration among death-sentenced defendants appears to be driven by the threat of execution, but most death-sentenced defendants are removed from death row and resentenced to life imprisonment, after which the likelihood of exoneration drops sharply. We use survival analysis to model this effect, and estimate that if all death-sentenced defendants remained under sentence of death indefinitely, at least 4.1% would be exonerated. We conclude that this is a conservative estimate of the proportion of false conviction among death sentences in the United States. PMID:24778209
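
    The basic survival-analysis building block behind such an estimate is the Kaplan-Meier product-limit curve; the sketch below is a generic implementation on toy data, not the paper's model, which additionally treats resentencing off death row as a competing risk.

      import numpy as np

      def kaplan_meier(time, event):
          """time: observation time per case; event: 1 = event observed, 0 = censored."""
          time, event = np.asarray(time, float), np.asarray(event, int)
          at_risk = len(time)
          s, curve = 1.0, []
          for t in np.unique(time):
              d = int(np.sum((time == t) & (event == 1)))   # events at time t
              if d:
                  s *= 1.0 - d / at_risk
                  curve.append((t, s))
              at_risk -= int(np.sum(time == t))             # everyone observed only to t
          return curve

      # Toy data: years under sentence; 1 = exonerated, 0 = censored.
      print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 0, 1, 0, 1, 0]))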

  13. Patient Factors Influencing Respiratory-Related Clinician Actions in Chronic Obstructive Pulmonary Disease Screening.

    PubMed

    Wadland, William C; Zubek, Valentina Bayer; Clerisme-Beaty, Emmanuelle M; Ríos-Bedoya, Carlos F; Yawn, Barbara P

    2017-01-01

    The purpose of this study was to identify patient-related factors that may explain the increased likelihood of receiving a respiratory-related clinician action in patients identified to be at risk for chronic obstructive pulmonary disease in a U.S.-based pragmatic study of chronic obstructive pulmonary disease screening. This post hoc analysis (conducted in 2014-2015) of the Screening, Evaluating and Assessing Rate Changes of Diagnosing Respiratory Conditions in Primary Care 1 (SEARCH1) study (conducted in 2010-2011) used the chronic obstructive pulmonary disease Population Screener questionnaire in 112 primary care practices. Anyone with a previous chronic obstructive pulmonary disease diagnosis was excluded. Multivariate logistic regression modeling was used to assess patient factors associated with the likelihood of receiving a respiratory-related clinician action following positive screening. Overall, 994 of 6,497 (15%) screened positive and were considered at risk for chronic obstructive pulmonary disease. However, only 187 of the 994 patients (19%) who screened positive received a respiratory-related clinician action. The chances of receiving a respiratory-related clinician action were significantly increased in patients who visited their physician with a respiratory issue (p<0.05) or had already been prescribed a respiratory medication (p<0.05). Most (81%) patients who screened positive or had a respiratory-related clinician action had one or more comorbidity, including cardiovascular disease (68%), diabetes (30%), depression/anxiety (26%), asthma (11%), and cancer (9%). Routine chronic obstructive pulmonary disease screening appears to promote respiratory-related clinician actions in patients with a high likelihood for disease who have respiratory complaints or already use prescribed respiratory medication. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  14. 32 CFR 651.29 - Determining when to use a CX (screening criteria).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... action has not been segmented to meet the definition of a CX. Segmentation can occur when an action is... action can be too narrowly defined, minimizing potential impacts in an effort to avoid a higher level of... formal Clean Air Act conformity determination is required. (8) Reasonable likelihood of violating any...

  15. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  16. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
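
    A minimal way to compute such a likelihood is a hidden-Markov-style forward pass over the photon sequence, propagating the state populations between photons and reweighting at each photon by its color probability. The sketch below makes the simplifying assumption that photon count rates are state-independent, which the full treatment in the paper does not:

      import numpy as np
      from scipy.linalg import expm

      def log_likelihood(colors, dts, k12, k21, E=(0.2, 0.8)):
          """colors: 0 = donor, 1 = acceptor; dts: time gap preceding each photon.
          States switch with rates k12 (1->2) and k21 (2->1); a photon emitted in
          state i is an acceptor photon with probability E[i]."""
          K = np.array([[-k12,  k21],
                        [ k12, -k21]])                  # master-equation rate matrix
          p = np.array([k21, k12]) / (k12 + k21)        # start at equilibrium
          E = np.asarray(E, float)
          ll = 0.0
          for c, dt in zip(colors, dts):
              p = expm(K * dt) @ p                      # propagate state populations
              p = p * np.where(c == 1, E, 1.0 - E)      # weight by emission probability
              norm = p.sum()
              ll += np.log(norm)
              p /= norm                                 # keep the recursion stable
          return ll

      colors = [0, 1, 1, 0, 1]
      dts = [0.001, 0.004, 0.002, 0.010, 0.003]
      print(log_likelihood(colors, dts, k12=50.0, k21=80.0))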

  17. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  18. Effect of rainfall timing and tillage on the transport of steroid hormones in runoff from manure amended row crop fields.

    PubMed

    Biswas, Sagor; Kranz, William L; Shapiro, Charles A; Snow, Daniel D; Bartelt-Hunt, Shannon L; Mamo, Mitiku; Tarkalson, David D; Zhang, Tian C; Shelton, David P; van Donk, Simon J; Mader, Terry L

    2017-02-15

    Runoff generated from livestock manure amended row crop fields is one of the major pathways of hormone transport to the aquatic environment. This study determined the effects of manure handling, tillage methods, and rainfall timing on the occurrence and transport of steroid hormones in runoff from row crop fields. Stockpiled and composted manure from hormone-treated and untreated animals were applied to test plots and subjected to two rainfall simulation events 30 days apart. Across the two rainfall simulation events, steroid hormones or their metabolites were detected in 8-86% of runoff samples from each tillage and manure treatment. The most commonly detected hormones were 17β-estradiol, estrone, estriol, testosterone, and α-zearalenol, at concentrations ranging up to 100-200 ng L(-1). Considering the maximum detected concentrations in runoff, no more than 10% of the applied hormone can be transported through the dissolved phase of runoff. Results from the study indicate that hormones can persist in soils receiving livestock manure over an extended period of time and that the dissolved phase of runoff is not the preferred pathway of hormone transport from manure-applied fields, irrespective of tillage treatments and timing of rainfall. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Zonal wavefront sensor with reduced number of rows in the detector array.

    PubMed

    Boruah, Bosanta R; Das, Abhijit

    2011-07-10

    In this paper, we describe a zonal wavefront sensor in which the photodetector array can have a smaller number of rows. The test wavefront is incident on a two-dimensional array of diffraction gratings followed by a single focusing lens. The periodicity and the orientation of the grating rulings of each grating can be chosen such that the +1 order beams from the gratings form an array of focal spots in the detector plane. We show that by using a square array of zones, it is possible to generate an array of +1 order focal spots having a smaller number of rows, thus reducing the height of the required detector array. The phase profile of the test wavefront can be estimated by measuring the displacements of the +1 order focal spots for the test wavefront relative to the +1 order focal spots for a plane reference wavefront. The narrower photodetector array can offer several advantages, such as a faster frame rate of the wavefront sensor, reduced cross talk between nearby detector zones, and a decrease in the maximum thermal noise. We also present results from a proof-of-concept experimental arrangement using the proposed wavefront sensing scheme. © 2011 Optical Society of America

  20. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  1. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12, and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.

  2. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
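
    The ML/ReML distinction the paper develops in full generality can be seen in miniature in the spherical-error linear model, where ML's variance estimator divides the residual sum of squares by n and is biased, while ReML divides by n - p:

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, sigma2 = 30, 3, 4.0
      X = rng.normal(size=(n, p))
      reps_ml, reps_reml = [], []
      for _ in range(2000):
          y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, np.sqrt(sigma2), n)
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          rss = np.sum((y - X @ beta) ** 2)
          reps_ml.append(rss / n)          # ML estimate of the error variance
          reps_reml.append(rss / (n - p))  # ReML accounts for estimating beta
      print(np.mean(reps_ml), np.mean(reps_reml))   # ~3.6 (biased) vs ~4.0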

  3. Genetic distances and phylogenetic trees of different Awassi sheep populations based on DNA sequencing.

    PubMed

    Al-Atiyat, R M; Aljumaah, R S

    2014-08-27

    This study aimed to estimate evolutionary distances and to reconstruct phylogeny trees between different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best general time reverse distance model. Three methods of distance estimation were then used. The maximum composite likelihood test was considered for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was shown between the South sheep and North sheep. On the other hand, the KSA sheep as an outgroup showed shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of Jordan sheep populations from the Saudi population. The overall results support geographical information and ecological types of the sheep populations studied. Summing up, the resulting phylogeny trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.

  4. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data and other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many methods used in SAE is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduces the MSE in small area estimation.
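
    As a rough sketch of the model-fitting step (not the authors' implementation), an area-level random effect model can be fitted by REML with statsmodels' linear mixed model; the data frame and column names below are hypothetical stand-ins for the survey and auxiliary data:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "area": np.repeat(np.arange(20), 15),     # 20 small areas, 15 samples each
          "aux":  rng.normal(size=300),             # auxiliary covariate
      })
      area_effect = rng.normal(0, 0.5, 20)[df["area"]]
      df["expenditure"] = 2.0 + 1.5 * df["aux"] + area_effect + rng.normal(0, 1.0, 300)

      model = smf.mixedlm("expenditure ~ aux", df, groups=df["area"])
      fit = model.fit(reml=True)                    # REML instead of plain ML
      print(fit.summary())
      print(fit.random_effects[0])                  # predicted effect for area 0 (EBLUP-style)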

  5. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    PubMed

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  6. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
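
    The projected-gradient idea is easy to state for a single qubit: ascend the log-likelihood and project back onto the set of density matrices by clipping the eigenvalues onto the probability simplex. The sketch below is a plain (non-accelerated) version; the paper's algorithm adds momentum and step-size control that this sketch omits.

      import numpy as np

      I2 = np.eye(2, dtype=complex)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
      Z = np.array([[1, 0], [0, -1]], dtype=complex)
      # Six-outcome POVM: measure X, Y, Z each one third of the time.
      povm = [(I2 + s * W) / 6.0 for W in (X, Y, Z) for s in (+1, -1)]

      def project_to_state(A):
          # Euclidean projection of a Hermitian matrix onto {rho >= 0, tr(rho) = 1}:
          # project the eigenvalue vector onto the probability simplex.
          vals, vecs = np.linalg.eigh(A)
          u = np.sort(vals)[::-1]
          css = np.cumsum(u)
          j = np.arange(1, len(u) + 1)
          k = np.max(j[u + (1.0 - css) / j > 0])
          lam = np.maximum(vals + (1.0 - css[k - 1]) / k, 0.0)
          return (vecs * lam) @ vecs.conj().T

      true_rho = project_to_state(0.5 * I2 + 0.4 * X + 0.2 * Z)
      probs = np.real(np.array([np.trace(E @ true_rho) for E in povm]))
      probs /= probs.sum()
      rng = np.random.default_rng(0)
      freq = rng.multinomial(5000, probs) / 5000.0     # simulated relative frequencies

      rho, step = I2 / 2.0, 0.2
      for _ in range(300):
          grad = sum(f / max(np.real(np.trace(E @ rho)), 1e-12) * E
                     for f, E in zip(freq, povm))      # gradient of the log-likelihood
          rho = project_to_state(rho + step * grad)    # ascend, then project back
      print(np.round(rho, 3))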

  7. Varied applications of a new maximum-likelihood code with complete covariance capability. [FERRET, for data adjustment]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1978-01-01

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: a 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra; reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra; the simultaneous use of measured reaction rates, measured worths, microscopic measurements, and theoretical models is applied to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.

  8. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to one previously derived in a different way by W. H. Richardson ["Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972)] and L. B. Lucy ["An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974)]. Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
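
    The iteration itself is compact; a minimal 2-D version under the usual assumptions (known shift-invariant PSF, Poisson data) is:

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(data, psf, n_iter=50):
          psf = psf / psf.sum()
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full_like(data, data.mean(), dtype=float)
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = data / np.maximum(blurred, 1e-12)      # avoid division by zero
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      # Toy demonstration: blur a point source, add Poisson noise, restore.
      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[32, 32] = 1000.0
      yy, xx = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx**2 + yy**2) / 8.0)
      data = rng.poisson(fftconvolve(truth, psf / psf.sum(), mode="same")).astype(float)
      restored = richardson_lucy(data, psf, n_iter=100)
      print(truth.max(), data.max(), restored.max())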

  9. Event generators for address event representation transmitters

    NASA Astrophysics Data System (ADS)

    Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Linares Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. In a typical AER transmitter chip, there is an array of neurons that generate events. They send events to a peripheral circuit (call it the "AER Generator") that transforms those events into neuron coordinates (addresses), which are put sequentially on an interchip high-speed digital bus. This bus includes a parallel multi-bit address word plus Rqst (request) and Ack (acknowledge) handshaking signals for asynchronous data exchange. Two main approaches for implementing such AER Generator circuits have been published; they differ in how they handle event collisions coming from the array of neurons. One approach is based on detecting and discarding collisions, while the other incorporates arbitration for sequencing colliding events. The first approach is simpler and faster, while the second is able to handle much higher event traffic. In this article we concentrate on the second, arbiter-based approach. Boahen has published several techniques for implementing and improving the arbiter-based approach. Originally, he proposed an arbitration scheme by rows, followed by column arbitration. In this scheme, while one neuron was selected by the arbiters to transmit its event off the chip, the rest of the neurons in the array were frozen and could not transmit any further events during this time window. This limited the maximum transmission speed. To improve this speed, Boahen proposed an improved 'burst mode' scheme, in which, after the row arbitration, a complete row of events is pipelined out of the array and arbitered off the chip at higher speed. During this single-row event arbitration, the array is free to generate new events and communicate them to the row arbiter in a pipelined mode. This scheme significantly improves the maximum event transmission speed, especially for high-traffic situations where speed is more critical. We have analyzed and studied this approach and have detected some shortcomings in the circuits reported by Boahen, which may produce erroneous behavior under some statistical conditions. The present paper proposes improvements to overcome such situations. The improved AER Generator has been implemented in an AER transmitter system.

  10. Effects of the 2008 high-flow experiment on water quality in Lake Powell and Glen Canyon Dam releases, Utah-Arizona

    USGS Publications Warehouse

    Vernieu, William S.

    2010-01-01

    Under the direction of the Secretary of the Interior, the U.S. Geological Survey's Grand Canyon Monitoring and Research Center (GCMRC) conducted a high-flow experiment (HFE) at Glen Canyon Dam (GCD) from March 4 through March 9, 2008. This experiment was conducted under enriched sediment conditions in the Colorado River within Grand Canyon and was designed to rebuild sandbars, aid endangered humpback chub (Gila cypha), and benefit various downstream resources, including rainbow trout (Oncorhynchus mykiss), the aquatic food base, riparian vegetation, and archaeological sites. During the experiment, GCD discharge increased to a maximum of 1,160 m3/s and remained at that rate for 2.5 days by near-capacity operation of the hydroelectric powerplant at 736 m3/s, augmented by discharge from the river outlet works (ROW) at 424 m3/s. The ROW releases water from Lake Powell approximately 30 m below the powerplant penstock elevation and bypasses the powerplant turbines. During the HFE, the surface elevation of Lake Powell was reduced by 0.8 m. This report describes studies that were conducted before and after the experiment to determine the effects of the HFE on (1) the stratification in Lake Powell in the forebay immediately upstream of GCD and (2) the water quality of combined GCD releases and changes that occurred through the tailwater below the dam. The effects of the HFE on the water quality and stratification in the water column of the GCD forebay and at upstream locations in Lake Powell were minimal compared to those during the beach/habitat-building flow experiment conducted in 1996, in which high releases of 1,273 m3/s were sustained for a 9-day period. However, during the 2008 HFE, there was evidence of increased advective transport of reservoir water at the penstock withdrawal depth and subsequent mixing of this withdrawal current with water above and below this depth. Reservoir hydrodynamics during the HFE period were largely controlled by a winter inflow density current, which was moving through the deepest portion of the reservoir and approaching GCD near the end of the experiment. Compared to the beach/habitat-building flow experiment of 1996, the 2008 HFE had less effect on the reservoir because of the decreased volume of discharge from the dam and the different behavior of the winter inflow density current. The operation of the ROW increased the dissolved oxygen (DO) concentration of GCD releases and resulted in DO supersaturation at higher release volumes. The jets of water discharged from the ROW caused these increases. Elevated DO concentrations persisted through the tailwater of the dam to Lees Ferry. At maximum ROW operation, downstream DO concentrations increased to approximately 120 percent of saturation.

  11. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    PubMed

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in the recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e., we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks under the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
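
    At the heart of both concepts is scoring individual trees; softwired parsimony takes the minimum score over the trees a network displays. The sketch below scores hypothetical displayed trees with Fitch's algorithm for a single character (the enumeration of displayed trees from a network is omitted):

      def fitch_score(tree, states):
          """tree: nested tuples with leaf names at the tips; states: leaf -> state."""
          score = 0
          def post(node):
              nonlocal score
              if isinstance(node, str):
                  return {states[node]}
              sets = [post(child) for child in node]
              common = set.intersection(*sets)
              if common:
                  return common
              score += 1                    # one extra state change needed here
              return set.union(*sets)
          post(tree)
          return score

      states = {"a": 0, "b": 1, "c": 0, "d": 1}
      displayed_trees = [(("a", "b"), ("c", "d")),   # tree shown with reticulation off
                         (("a", "c"), ("b", "d"))]   # tree shown with reticulation on
      print(min(fitch_score(t, states) for t in displayed_trees))  # softwired score: 1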

  12. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  13. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  14. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield method than with our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
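
    For contrast with the extensions developed in the paper, the classical Mayfield estimator itself is only a few lines: exposure-days are accumulated over observation intervals, with a failed interval conventionally contributing half its length, as sketched below.

      def mayfield_daily_survival(intervals):
          """intervals: (days between visits, survived_whole_interval) pairs."""
          exposure = 0.0
          deaths = 0
          for days, survived in intervals:
              if survived:
                  exposure += days
              else:
                  exposure += days / 2.0   # Mayfield's convention: loss at mid-interval
                  deaths += 1
          return 1.0 - deaths / exposure   # constant daily survival rate

      obs = [(4, True), (4, True), (4, False), (3, True), (5, False)]
      dsr = mayfield_daily_survival(obs)
      print(dsr, dsr ** 28)                # daily rate and, e.g., 28-day nest success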

  15. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey.

    PubMed

    Peyre, Hugo; Leplège, Alain; Coste, Joël

    2011-03-01

    Missing items are common in quality of life (QoL) questionnaires and present a challenge for research in this field. It remains unclear which of the various methods proposed to deal with missing data performs best in this context. We compared personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques using various realistic simulation scenarios of item missingness in QoL questionnaires constructed within the framework of classical test theory. Samples of 300 and 1,000 subjects were randomly drawn from the 2003 INSEE Decennial Health Survey (of 23,018 subjects representative of the French population and having completed the SF-36) and various patterns of missing data were generated according to three different item non-response rates (3, 6, and 9%) and three types of missing data (Little and Rubin's "missing completely at random," "missing at random," and "missing not at random"). The missing data methods were evaluated in terms of accuracy and precision for the analysis of one descriptive and one association parameter for three different scales of the SF-36. For all item non-response rates and types of missing data, multiple imputation and full information maximum likelihood appeared superior to the personal mean score and especially to hot deck in terms of accuracy and precision; however, the use of personal mean score was associated with insignificant bias (relative bias <2%) in all studied situations. Whereas multiple imputation and full information maximum likelihood are confirmed as reference methods, the personal mean score appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
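
    A sketch of the simplest of the four techniques, the personal mean score: each missing item is replaced by the mean of the respondent's completed items on the same scale, conventionally only when at least half the items were answered (that threshold is an assumption here, not taken from the paper):

      import numpy as np
      import pandas as pd

      def personal_mean_score(items: pd.DataFrame, min_answered=0.5):
          filled = items.copy()
          answered = items.notna().mean(axis=1)            # fraction of items completed
          row_means = items.mean(axis=1, skipna=True)      # each respondent's own mean
          for col in items.columns:
              mask = items[col].isna() & (answered >= min_answered)
              filled.loc[mask, col] = row_means[mask]
          return filled

      scale = pd.DataFrame({"q1": [3, 4, np.nan], "q2": [3, np.nan, np.nan],
                            "q3": [5, 4, 2], "q4": [3, 4, np.nan]})
      print(personal_mean_score(scale))   # row 3 stays incomplete (too few answers)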

  16. Negotiation and Capital: Athletes' Use of Power in an Elite Men's Rowing Program

    ERIC Educational Resources Information Center

    Purdy, Laura; Jones, Robyn; Cassidy, Tania

    2009-01-01

    The aim of this paper is to examine how power is given, acquired and used by athletes in the elite sporting context. It focuses on a top-level athlete's reactions to the behaviors of his coaches and how such actions contribute to the creation of a coaching climate, which both influences and "houses" coaching. The paper centers on Sean (a…

  17. Complete Genomes of Bacillus coagulans S-lac and Bacillus subtilis TO-A JPC, Two Phylogenetically Distinct Probiotics

    PubMed Central

    Ramya, T. N. C.; Subramanian, Srikrishna

    2016-01-01

    Several spore-forming strains of Bacillus are marketed as probiotics due to their ability to survive harsh gastrointestinal conditions and confer health benefits to the host. We report the complete genomes of two commercially available probiotics, Bacillus coagulans S-lac and Bacillus subtilis TO-A JPC, and compare them with the genomes of other Bacillus and Lactobacillus. The taxonomic position of both organisms was established with a maximum-likelihood tree based on twenty-six housekeeping proteins. Analysis of all probiotic strains of Bacillus and Lactobacillus reveals that the essential sporulation proteins are conserved in all Bacillus probiotic strains, while they are absent in Lactobacillus spp. We identified various antibiotic resistance, stress-related, and adhesion-related domains in these organisms, which likely support their probiotic action by enabling adhesion to host epithelial cells and survival during antibiotic treatment and harsh conditions. PMID:27258038

  18. Complete Genomes of Bacillus coagulans S-lac and Bacillus subtilis TO-A JPC, Two Phylogenetically Distinct Probiotics.

    PubMed

    Khatri, Indu; Sharma, Shailza; Ramya, T N C; Subramanian, Srikrishna

    2016-01-01

    Several spore-forming strains of Bacillus are marketed as probiotics due to their ability to survive harsh gastrointestinal conditions and confer health benefits to the host. We report the complete genomes of two commercially available probiotics, Bacillus coagulans S-lac and Bacillus subtilis TO-A JPC, and compare them with the genomes of other Bacillus and Lactobacillus. The taxonomic position of both organisms was established with a maximum-likelihood tree based on twenty-six housekeeping proteins. Analysis of all probiotic strains of Bacillus and Lactobacillus reveals that the essential sporulation proteins are conserved in all Bacillus probiotic strains, while they are absent in Lactobacillus spp. We identified various antibiotic resistance, stress-related, and adhesion-related domains in these organisms, which likely support their probiotic action by enabling adhesion to host epithelial cells and survival during antibiotic treatment and harsh conditions.

  19. Analysis of differential selective forces acting on the coat protein (P3) of the plant virus family Luteoviridae.

    PubMed

    Torres, Marina W; Corrêa, Régis L; Schrago, Carlos G

    2005-12-30

    The coat protein (CP) of the family Luteoviridae is directly associated with the success of infection. It participates in various steps of the virus life cycle, such as virion assembly, stability, systemic infection, and transmission. Despite its importance, extensive studies on the molecular evolution of this protein are lacking. In the present study, we investigate the action of differential selective forces on the CP coding region using maximum likelihood methods. We found that the protein is subjected to heterogeneous selective pressures and some sites may be evolving near neutrality. Based on the proposed 3-D model of the CP S-domain, we showed that nearly neutral sites are predominantly located in the region of the protein that faces the interior of the capsid, in close contact with the viral RNA, while highly conserved sites are mainly part of beta-strands, in the protein's major framework.

  20. Grey-box modelling of aeration tank settling.

    PubMed

    Bechman, Henrik; Nielsen, Marinus K; Poulsen, Niels Kjølstad; Madsen, Henrik

    2002-04-01

    A model is established for the suspended solids (SS) concentrations in the aeration tanks and in their effluent during aeration tank settling (ATS) operation. The model is based on simple SS mass balances, a model of the sludge settling, and a simple model of how the SS concentration in the effluent from the aeration tanks depends on the actual concentrations in the tanks and the sludge blanket depth. The model is formulated in continuous time by means of stochastic differential equations with discrete-time observations. The parameters of the model are estimated using a maximum likelihood method from data from an alternating BioDenipho wastewater treatment plant (WWTP). The model is an important tool for analyzing ATS operation and for selecting the appropriate control actions during ATS, as it can be used to predict the SS amounts in the aeration tanks as well as in their effluent.

  1. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
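
    The tests proposed in the paper additionally correct for measurement error in covariates; as a baseline for comparison, a minimal sketch of the classical score test for Poisson overdispersion (in the spirit of Dean and Lawless, not the paper's modified statistics) might look as follows. All names and the simulated data are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def poisson_overdispersion_score(y, X):
        """Score statistic against overdispersion after a Poisson GLM fit;
        approximately N(0, 1) under the Poisson null."""
        mu = sm.GLM(y, X, family=sm.families.Poisson()).fit().fittedvalues
        return np.sum((y - mu) ** 2 - y) / np.sqrt(2 * np.sum(mu ** 2))

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(500, 1)))
    rate = np.exp(0.2 + 0.5 * X[:, 1]) * rng.gamma(2.0, 0.5, size=500)
    y = rng.poisson(rate)  # gamma mixing makes the counts overdispersed
    print(poisson_overdispersion_score(y, X))  # well above the 1.645 cutoff
    ```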

  2. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  3. Testing deep reticulate evolution in Amaryllidaceae Tribe Hippeastreae (Asparagales) with ITS and chloroplast sequence data

    USDA-ARS's Scientific Manuscript database

    The phylogeny of Amaryllidaceae tribe Hippeastreae was inferred using chloroplast (3’ycf1, ndhF, trnL-F) and nuclear (ITS rDNA) sequence data under maximum parsimony and maximum likelihood frameworks. Network analyses were applied to resolve conflicting signals among data sets and putative scenarios...

  4. Phylogenetic analyses of RPB1 and RPB2 support a middle Cretaceous origin for a clade comprising all agriculturally and medically important fusaria

    USDA-ARS's Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most economically important and systematically challenging groups of mycotoxigenic phytopathogens and emergent human pathogens. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial RNA polymerase largest (...

  5. Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.

    PubMed

    Kavussanu, Maria; Ring, Christopher

    2017-08-01

    In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes, who feel that being a moral person is central to their self-concept, are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.

  6. Environmental Assessment and Finding of No Significant Impact: Western's Hoover Dam Bypass Project Phase II (Double-Circuiting a Portion of the Hoover-Mead No.5 and No.7 230-kV Transmission Lines with the Henderson-Mead No.1 230-kV Transmission Line, Clark County, Nevada)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N /A

    2003-10-27

    The U.S. Highway 93 (U.S. 93) Hoover Dam Bypass Project calls for the U.S. Department of Energy (DOE) Western Area Power Administration (Western) to remove its Arizona and Nevada (A&N) Switchyard. As a result of this action, Western must reconfigure its existing electrical transmission system in the Hoover Dam area. Western proposes to double-circuit a portion of the Hoover-Mead No.5 and No.7 230-kV Transmission Lines with the Henderson-Mead No.1 Transmission Line (see Figure 1-1). Double-circuiting is the placement of two separate electrical circuits, typically in the form of three separate conductors or bundles of conductors, on the same set of transmission line structures. The old Henderson-Hoover 230-kV Transmission Line would become the new Henderson-Mead No.1 and would extend approximately eight miles to connect with the Mead Substation. Western owns, operates, and maintains the Hoover-Mead No.5 and No.7, and Henderson-Hoover electrical power transmission lines. Additionally, approximately 0.25 miles of new right-of-way (ROW) would be needed for the Henderson-Mead No.1 when it transfers from double-circuiting with the Hoover-Mead No.7 to the Hoover-Mead No.5 at the Boulder City Tap. The proposed project would also involve a new transmission line ROW and structures where the Henderson-Mead No.1 will split from the Hoover-Mead No.5 and enter the northeast corner of the Mead Substation. Lastly, Western has proposed adding fiber optic overhead ground wire from the Hoover Power Plant to the Mead Substation on to the Henderson-Mead No.1, Hoover-Mead No.5 and No.7 Transmission Lines. The proposed project includes replacing existing transmission line tower structures, installing new structures, and adding new electrical conductors and fiber optic cables. As a consequence of these activities, ground disturbance may result from grading areas for structure placement, constructing new roads, improving existing roads for vehicle and equipment access, and from installing structures, conductors, and fiber optic cables. Project construction activities would be conducted within the existing 200-foot transmission line ROW and 50-foot access road ROW, although new spur access roads could occur outside of existing ROWs. As lead Federal agency for this action under National Environmental Policy Act (NEPA), Western must ensure that adverse environmental effects on Federal and non-Federal lands and resources are avoided or minimized. This Environmental Assessment (EA) is intended to be a concise public document that assesses the probable and known impacts to the environment from Western's Proposed Action and alternatives, and reaches a conclusion about the significance of the impacts. This EA was prepared in compliance with NEPA regulations published by the Council on Environmental Quality (40 CFR 1500-1508) and implementing procedures of the Department of Energy (10 CFR 1021).

  7. Multiple-hit parameter estimation in monolithic detectors.

    PubMed

    Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

    2013-02-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.

  8. The Role of Persuasive Arguments in Changing Affirmative Action Attitudes and Expressed Behavior in Higher Education

    ERIC Educational Resources Information Center

    White, Fiona A.; Charles, Margaret A.; Nelson, Jacqueline K.

    2008-01-01

    The research reported in this article examined the conditions under which persuasive arguments are most effective in changing university students' attitudes and expressed behavior with respect to affirmative action (AA). The conceptual framework was a model that integrated the theory of reasoned action and the elaboration likelihood model of…

  9. Buffering capability and limitations in low dispersion photonic crystal waveguides with elliptical airholes.

    PubMed

    Long, Fang; Tian, Huiping; Ji, Yuefeng

    2010-09-01

    A low dispersion photonic crystal waveguide with triangular lattice elliptical airholes is proposed for compact, high-performance optical buffering applications. In the proposed structure, we obtain a negligible-dispersion bandwidth with constant group velocity ranging from c/41 to c/256, by optimizing the major and minor axes of bulk elliptical holes and adjusting the position and the hole size of the first row adjacent to the defect. In addition, the limitations of buffer performance in a dispersion engineering waveguide are well studied. The maximum buffer capacity and the maximum data rate can reach as high as 262 bits and 515 Gbits/s, respectively. The corresponding delay time is about 255.4 ps.

  10. Performance of Axial-Flow Supersonic Compressor on XJ-55-FF-1 Turbojet Engine. I - Preliminary Performance of Compressor

    NASA Technical Reports Server (NTRS)

    Hartmann, Melvin J.; Graham, Robert C.

    1949-01-01

    An investigation was conducted to determine the performance characteristics of the axial-flow supersonic compressor of the XJ-55-FF-1 turbojet engine. The test unit consisted of a row of inlet guide vanes and a supersonic rotor; the stator vanes after the rotor were omitted. The maximum pressure ratio produced in the single stage was 2.28 at an equivalent tip speed of 1814 feet per second, with an adiabatic efficiency of approximately 0.61 and an equivalent weight flow of 13.4 pounds per second. The maximum efficiency of 0.79 was obtained at an equivalent tip speed of 801 feet per second.

  11. Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang

    2016-12-01

    In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains driven by action potential data. The electrical signals are recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations with different frequencies applied at the “Zusanli” point of experimental rats. A maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. By validating the accuracy of firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, comparing the performance of several GLMs based on distinct inputs suggests that an input in the form of a half-sine with noise can well describe the generator potential induced by the mechanical action of acupuncture. In particular, the comparison of reproducing the experimental spikes for five selected inputs is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulation, which indicates that the mapping from the half-sine-with-noise input to the experimental spikes captures the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
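
    A stripped-down version of this kind of fit, assuming a Poisson GLM whose log-rate is linear in a half-sine input, is sketched below. The bin width, parameter values, and drive are illustrative assumptions, not the paper's actual model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(theta, u, counts, dt):
        """Poisson negative log-likelihood (up to a constant) for a GLM
        with rate lambda(t) = exp(b0 + b1 * u(t))."""
        b0, b1 = theta
        lam = np.exp(b0 + b1 * u) * dt  # expected spikes per bin
        return np.sum(lam - counts * np.log(lam))

    dt = 0.01                                            # 10-ms bins
    t = np.arange(0.0, 10.0, dt)
    u = np.clip(np.sin(2 * np.pi * 2.0 * t), 0.0, None)  # half-sine drive
    rng = np.random.default_rng(1)
    counts = rng.poisson(np.exp(np.log(20.0) + 1.5 * u) * dt)
    fit = minimize(neg_log_lik, x0=(0.0, 0.0), args=(u, counts, dt))
    print(fit.x)  # should recover roughly (log 20 = 3.0, 1.5)
    ```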

  12. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  13. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-21

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, the dictionary can be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and from real data.
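
    The maximum likelihood core that such penalized reconstructions iterate on is the classical MLEM multiplicative update. A minimal sketch, with a random toy system matrix and without the dictionary-sparsity penalty, is given below.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Plain MLEM for y ~ Poisson(A @ x). A: (n_bins, n_pixels)
        system matrix; y: measured counts."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)  # sensitivity image (column sums of A)
        for _ in range(n_iter):
            proj = A @ x
            ratio = np.where(proj > 0, y / proj, 0.0)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    rng = np.random.default_rng(2)
    A = rng.uniform(size=(200, 50))
    x_true = rng.uniform(0.5, 2.0, size=50)
    y = rng.poisson(A @ x_true)
    print(np.corrcoef(mlem(A, y), x_true)[0, 1])  # high correlation
    ```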

  14. A maximum likelihood analysis of the CoGeNT public dataset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelso, Chris, E-mail: ckelso@unf.edu

    The CoGeNT detector, located in the Soudan Underground Laboratory in northern Minnesota, consists of a 475-gram (330-gram fiducial) p-type point-contact germanium target that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.
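
    A toy version of an unbinned maximum likelihood fit, here for the fraction of a Gaussian "signal" peak over an exponential background on synthetic energies, can be written as below. The shapes, parameters, and data are illustrative and are not CoGeNT's likelihood.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import expon, norm

    rng = np.random.default_rng(3)
    E = np.concatenate([expon.rvs(scale=2.0, size=900, random_state=rng),
                        norm.rvs(loc=1.5, scale=0.2, size=100, random_state=rng)])

    def nll(f):  # unbinned negative log-likelihood in the signal fraction f
        pdf = f * norm.pdf(E, 1.5, 0.2) + (1 - f) * expon.pdf(E, scale=2.0)
        return -np.sum(np.log(pdf))

    fit = minimize_scalar(nll, bounds=(1e-4, 0.5), method="bounded")
    print(fit.x)  # close to the true signal fraction of 0.1
    ```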

  15. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
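
    As background for the first step, a minimal single-carrier MMSE frequency-domain equalizer can be sketched as follows, with the channel assumed known rather than estimated by the proposed 2-step MLCE, and with spreading omitted. All parameters are illustrative.

    ```python
    import numpy as np

    def mmse_fde(rx_block, H, snr_linear):
        """MMSE-FDE: per-bin weights W_k = conj(H_k) / (|H_k|^2 + 1/SNR)."""
        R = np.fft.fft(rx_block)
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)
        return np.fft.ifft(W * R)

    rng = np.random.default_rng(7)
    N = 64
    symbols = rng.choice([-1.0, 1.0], size=N)      # BPSK block
    h = np.array([0.8, 0.5, 0.3])                  # 3-tap fading channel
    H = np.fft.fft(h, N)
    rx = np.fft.ifft(H * np.fft.fft(symbols))      # cyclic-prefix channel model
    rx += 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    eq = mmse_fde(rx, H, snr_linear=100.0)
    print(np.mean(np.sign(eq.real) == symbols))    # ~1.0 at this SNR
    ```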

  16. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
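
    A minimal sketch of this kind of supervised maximum likelihood classification, assuming one multivariate Gaussian per land-cover class with equal priors, is shown below. The two-band training signatures are synthetic placeholders, not BOREAS data.

    ```python
    import numpy as np

    def ml_classify(train, labels, pixels):
        """Assign each pixel to the class whose fitted multivariate
        Gaussian gives the highest log-likelihood (equal priors)."""
        classes = np.unique(labels)
        scores = []
        for c in classes:
            Xc = train[labels == c]
            mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            d = pixels - mu
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
        return classes[np.argmax(scores, axis=0)]

    rng = np.random.default_rng(4)
    water = rng.normal([2.0, 2.0], 0.5, size=(100, 2))   # fake band signatures
    forest = rng.normal([5.0, 5.0], 0.5, size=(100, 2))
    train = np.vstack([water, forest])
    labels = np.array([0] * 100 + [1] * 100)
    print(ml_classify(train, labels, np.array([[2.2, 1.9], [4.8, 5.1]])))  # [0 1]
    ```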

  17. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  18. Impacts of tree rows on grassland birds & potential nest predators: A removal experiment

    USGS Publications Warehouse

    Ellison, Kevin S.; Ribic, Christine; Sample, David W.; Fawcett, Megan J.; Dadisman, John D.

    2013-01-01

    Globally, grasslands and the wildlife that inhabit them are widely imperiled. Encroachment by shrubs and trees has widely impacted grasslands in the past 150 years. In North America, most grassland birds avoid nesting near woody vegetation. Because woody vegetation fragments grasslands and potential nest predator diversity and abundance is often greater along wooded edge and grassland transitions, we measured the impacts of removing rows of trees and shrubs that intersected grasslands on potential nest predators and the three most abundant grassland bird species (Henslow’s sparrow [Ammodramus henslowii], Eastern meadowlark [Sturnella magna], and bobolink [Dolichonyx oryzivorus]) at sites in Wisconsin, U.S.A. We monitored 3 control and 3 treatment sites, for 1 yr prior to and 3 yr after tree row removal at the treatment sites. Grassland bird densities increased (2–4 times for bobolink and Henslow’s sparrow) and nesting densities increased (all 3 species) in the removal areas compared to control areas. After removals, Henslow’s sparrows nested within ≤50 m of the treatment area, where they did not occur when tree rows were present. Most dramatically, activity by woodland-associated predators nearly ceased (nine-fold decrease for raccoon [Procyon lotor]) at the removals and grassland predators increased (up to 27 times activity for thirteen-lined ground squirrel [Ictidomys tridecemlineatus]). Nest success did not increase, likely reflecting the increase in grassland predators. However, more nests were attempted by all 3 species (175 versus 116) and the number of successful nests for bobolinks and Henslow’s sparrows increased. Because of gains in habitat, increased use by birds, greater production of young, and the effective removal of woodland-associated predators, tree row removal, where appropriate based on the predator community, can be a beneficial management action for conserving grassland birds and improving fragmented and degraded grassland ecosystems.

  19. Impacts of Tree Rows on Grassland Birds and Potential Nest Predators: A Removal Experiment

    PubMed Central

    Ellison, Kevin S.; Ribic, Christine A.; Sample, David W.; Fawcett, Megan J.; Dadisman, John D.

    2013-01-01

    Globally, grasslands and the wildlife that inhabit them are widely imperiled. Encroachment by shrubs and trees has widely impacted grasslands in the past 150 years. In North America, most grassland birds avoid nesting near woody vegetation. Because woody vegetation fragments grasslands and potential nest predator diversity and abundance is often greater along wooded edge and grassland transitions, we measured the impacts of removing rows of trees and shrubs that intersected grasslands on potential nest predators and the three most abundant grassland bird species (Henslow’s sparrow [Ammodramus henslowii], Eastern meadowlark [Sturnella magna], and bobolink [Dolichonyx oryzivorus]) at sites in Wisconsin, U.S.A. We monitored 3 control and 3 treatment sites, for 1 yr prior to and 3 yr after tree row removal at the treatment sites. Grassland bird densities increased (2–4 times for bobolink and Henslow’s sparrow) and nesting densities increased (all 3 species) in the removal areas compared to control areas. After removals, Henslow’s sparrows nested within ≤50 m of the treatment area, where they did not occur when tree rows were present. Most dramatically, activity by woodland-associated predators nearly ceased (nine-fold decrease for raccoon [Procyon lotor]) at the removals and grassland predators increased (up to 27 times activity for thirteen-lined ground squirrel [Ictidomys tridecemlineatus]). Nest success did not increase, likely reflecting the increase in grassland predators. However, more nests were attempted by all 3 species (175 versus 116) and the number of successful nests for bobolinks and Henslow’s sparrows increased. Because of gains in habitat, increased use by birds, greater production of young, and the effective removal of woodland-associated predators, tree row removal, where appropriate based on the predator community, can be a beneficial management action for conserving grassland birds and improving fragmented and degraded grassland ecosystems. PMID:23565144

  20. Estimation of Efficiency of the Cooling Channel of the Nozzle Blade of Gas-Turbine Engines

    NASA Astrophysics Data System (ADS)

    Vikulin, A. V.; Yaroslavtsev, N. L.; Zemlyanaya, V. A.

    2018-02-01

    The main direction of improvement of gas-turbine plants (GTP) and gas-turbine engines (GTE) is increasing the gas temperature at the turbine inlet. To address this problem, promising systems for the intensification of heat exchange in cooled turbine blades are being developed. To this end, the cooling-channel efficiency of the nozzle blade was studied by the method of calorimetry in a liquid-metal thermostat, both in the basic modification and after design measures to improve the cooling system. A combined system of heat-exchange intensification with a complicated scheme of branched channels was developed; it consists of a vortex matrix and three rows of inclined intermittent trip strips. The maximum value of the hydraulic resistance ξ is observed at the first row of the trip strips, which is connected with the dynamic impact of the airflow on the channel walls, its turbulence, and its rotation by 117° at the inlet to the channels formed by the trip strips. These factors explain the high hydraulic resistance, equal to 3.7-3.4, at the first row of the trip strips. The effect was also confirmed by the results of thermal tests: an unevenness of heat transfer of 8-12% between the back and the trough of the blade is observed at the first row of the trip strips. This unevenness decays downstream, amounting to 3-7% at the second row of trip strips and almost disappearing at the third. In the area of the vortex matrix, the intensity of heat exchange on the blade back is higher than on the trough, which is explained by the different heights of the matrix ribs on its opposite sides. The design changes to the nozzle blade of the basic modification increased the intensity of heat exchange by 20-50% in the area of the vortex matrix and by 15-30% on the section of inclined intermittent trip strips. As a result of this research, new criterial dependences for complicated systems of heat-exchange intensification were obtained. The design of the nozzle blades can be used in developing promising high-temperature gas turbines.

  1. Accounting for complementarity to maximize monitoring power for species management.

    PubMed

    Tulloch, Ayesha I T; Chadès, Iadine; Possingham, Hugh P

    2013-10-01

    To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management. © 2013 Society for Conservation Biology.

  2. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Maximum likelihood inference implies a high, not a low, ancestral haploid chromosome number in Araceae, with a critique of the bias introduced by ‘x’

    PubMed Central

    Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.

    2012-01-01

    Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850

  4. Associations between Poor Sleep Quality and Stages of Change of Multiple Health Behaviors among Participants of Employee Wellness Program.

    PubMed

    Hui, Siu-Kuen Azor; Grandner, Michael A

    2015-01-01

    Using the Transtheoretical Model of behavioral change, this study evaluates the relationship between sleep quality and the motivation and maintenance processes of healthy behavior change. The current study is an analysis of data collected in 2008 from an online health risk assessment (HRA) survey completed by participants of the Kansas State employee wellness program (EWP) (N=13,322). Using multinomial logistic regression, associations between self-reported sleep quality and stages of change (i.e., precontemplation, contemplation, preparation, action, maintenance) in five health behaviors (stress management, weight management, physical activity, alcohol use, and smoking) were analyzed. Adjusted for covariates, poor sleep quality was associated with an increased likelihood of being in the contemplation, preparation, and in some cases action stage when engaging in the health behavior change process, but generally a lower likelihood of maintenance of the healthy behavior. The present study demonstrated that poor sleep quality was associated with an elevated likelihood of contemplating or initiating behavior change, but a decreased likelihood of maintaining healthy behavior change. It is important to include sleep improvement among the lifestyle management interventions offered in EWPs to comprehensively reduce health risks and promote the health of a large employee population.

  5. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  6. Ultrastructural and molecular characterization of Glugea serranus n. sp., a microsporidian infecting the blacktail comber, Serranus atricauda (Teleostei: Serranidae), in the Madeira Archipelago (Portugal).

    PubMed

    Casal, Graça; Rocha, Sónia; Costa, Graça; Al-Quraishy, Saleh; Azevedo, Carlos

    2016-10-01

    A new microsporidian infecting the connective tissue of the coelomic cavity of the blacktail comber Serranus atricauda, in the Madeira Archipelago (Portugal), is described on the basis of morphological, ultrastructural, and molecular features. The microsporidian formed large whitish xenomas adhering to the peritoneal visceral organs of the host. Each xenoma consisted of a single hypertrophic cell, in the cytoplasm of which mature spores proliferated within parasitophorous vacuoles surrounded by numerous collagen fibers. Mature spores were ellipsoidal and uninucleated, measuring an average of 6.5 ± 0.5 μm in length and 3.4 ± 0.6 μm in width. The anchoring disk of the polar filament was subterminal, laterally shifted from the anterior pole of the spore. The isofilar polar filament coiled in 18-19 turns, forming two rows that surrounded the posterior vacuole. The latter occupied about one third of the spore length. The polaroplast surrounding the apical and uncoiled portion of the polar filament displayed two distinct regions: a lamellar region and an electron-dense globule. Molecular analysis of the rRNA genes, including the internal transcribed spacer region, and phylogenetic analysis using maximum likelihood and neighbor joining demonstrated that this microsporidian parasite clustered with some Glugea species. Based on the differences found both at the morphological and molecular levels, to other members of the genus Glugea, the microsporidian infecting the blacktail comber is considered a new species, thus named Glugea serranus n. sp.

  7. Description of a new catfish genus (Siluriformes, Loricariidae) from the Tocantins River basin in central Brazil, with comments on the historical zoogeography of the new taxon

    PubMed Central

    Silva, Gabriel S. C.; Roxo, Fábio F.; Ochoa, Luz E.; Oliveira, Claudio

    2016-01-01

    This study presents the description of a new genus of the catfish subfamily Neoplecostominae from the Tocantins River basin. It can be distinguished from other neoplecostomine genera by the presence of (1) three hypertrophied bicuspid odontodes on the lateral portion of the body (character apparently present in mature males); (2) a large area without odontodes around the snout; (3) a post-dorsal ridge on the caudal peduncle; (4) a straight tooth series in the dentary and premaxillary rows; (5) the absence of abdominal plates; (6) a conspicuous series of enlarged papillae just posterior to the dentary teeth; and (7) caudal peduncle ellipsoid in cross section. We used maximum likelihood and Bayesian methods to estimate a time-calibrated tree with the published data on 116 loricariid species using one nuclear and three mitochondrial genes, and we used parametric biogeographic analyses (DEC and DECj models) to estimate ancestral geographic ranges and to infer the colonization routes of the new genus and the other neoplecostomines in the Tocantins River and the hydrographic systems of southeastern Brazil. Our phylogenetic results indicate that the new genus and species is a sister taxon of all the other members of the Neoplecostominae, originating during the Eocene at 47.5 Mya (32.7–64.5 Mya 95% HPD). The present distribution of the new genus and other neoplecostomines may be the result of a historical connection between the drainage basins of the Paraguay and Paraná rivers and the Amazon basin, mainly through headwater captures. PMID:27408594

  8. Morphological and molecular data reveal a new species of Neoechinorhynchus (Acanthocephala: Neoechinorhynchidae) from Dormitator maculatus in the Gulf of Mexico.

    PubMed

    Pinacho-Pinacho, Carlos Daniel; Sereno-Uribe, Ana L; García-Varela, Martín

    2014-12-01

    Neoechinorhynchus (Neoechinorhynchus) mexicoensis sp. n. is described from the intestine of Dormitator maculatus (Bloch 1792) collected in 5 coastal localities from the Gulf of Mexico. The new species is mainly distinguished from the other 33 described species of Neoechinorhynchus from the Americas associated with freshwater, marine and brackish fishes by having smaller middle and posterior hooks and possessing a small proboscis with three rows of six hooks each, apical hooks longer than other hooks and extending to the same level as the posterior hooks, 1 giant nucleus in the ventral body wall and females with eggs longer than other congeneric species. Sequences of the internal transcribed spacer (ITS) and the large subunit (LSU) of ribosomal DNA including the domain D2+D3 were used independently to corroborate the morphological distinction among the new species and other congeneric species associated with freshwater and brackish water fish from Mexico. The genetic divergence estimated among congeneric species ranged from 7.34 to 44% for ITS and from 1.65 to 32.9% for LSU. Maximum likelihood and Bayesian inference analyses with each dataset showed that the 25 specimens analyzed from 5 localities of the coast of the Gulf of Mexico parasitizing D. maculatus represent an independent clade with strong bootstrap support and posterior probabilities. The morphological evidence, plus the monophyly in the phylogenetic analyses, indicates that the acanthocephalans collected from intestine of D. maculatus from the Gulf of Mexico represent a new species, herein named N. (N.) mexicoensis sp. n. Copyright © 2014. Published by Elsevier Ireland Ltd.

  9. Imaging Electrically Evoked Micromechanical Motion within the Organ of Corti of the Excised Gerbil Cochlea

    PubMed Central

    Karavitaki, K. Domenica; Mountain, David C.

    2007-01-01

    The outer hair cell (OHC) of the mammalian inner ear exhibits an unusual form of somatic motility that can follow membrane-potential changes at acoustic frequencies. The cellular forces that produce this motility are believed to amplify the motion of the cochlear partition, thereby playing a key role in increasing hearing sensitivity. To better understand the role of OHC somatic motility in cochlear micromechanics, we developed an excised cochlea preparation to visualize simultaneously the electrically-evoked motion of hundreds of cells within the organ of Corti (OC). The motion was captured using stroboscopic video microscopy and quantified using cross-correlation techniques. The OC motion at ∼2–6 octaves below the characteristic frequency of the region was complex: OHC, Deiter's cell, and Hensen's cell motion were hundreds of times larger than the tectorial membrane, reticular lamina (RL), and pillar cell motion; the inner rows of OHCs moved antiphasic to the outer row; OHCs pivoted about the RL; and Hensen's cells followed the motion of the outer row of OHCs. Our results suggest that the effective stimulus to the inner hair cell hair bundles results not from a simple OC lever action, as assumed by classical models, but by a complex internal motion coupled to the RL. PMID:17277194

  10. [Effects of irrigation and planting patterns on photosynthetic characteristics of flag leaf and yield at late growth stages of winter wheat].

    PubMed

    Dong, Hao; Bi, Jun; Xia, Guang-Li; Zhou, Xun-Bo; Chen, Yu-Hai

    2014-08-01

    High-yield winter wheat cultivar Jimai 22 was used to study the effects of irrigation and planting patterns on the water consumption and photosynthetic characteristics of winter wheat in the field from 2009 to 2011. Three planting patterns (uniform row, wide-narrow row, and furrow) and four irrigation schedules (W0, no irrigation; W1, irrigation at the jointing stage; W2, irrigation at the jointing and anthesis stages; W3, irrigation at the jointing, anthesis, and milking stages; each irrigation was 60 mm) were tested. Results showed that with increasing irrigation amount, the flag leaf area, net photosynthesis rate, maximum photochemical efficiency, and actual light transformation efficiency at late growth stages of winter wheat increased. Compared with the W0 treatment, the other irrigation treatments had higher grain yields but lower water use efficiencies. Under the same irrigation condition, the flag leaf net photosynthesis, maximum photochemical efficiency, and actual light transformation efficiency were much higher under the furrow pattern. Grain yields of winter wheat under the furrow pattern with the W2 treatment were significantly higher than those of the other treatments. Taking grain yield and WUE into consideration, the furrow pattern combined with irrigation at the jointing and anthesis stages might be the optimal water-saving planting mode for winter wheat production in the North China Plain.

  11. Maximum likelihood decoding of Reed Solomon Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudan, M.

    We present a randomized algorithm which takes as input n distinct points {(x_i, y_i)}_{i=1}^n from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed-Solomon codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial-time-bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
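
    For intuition only, the list-decoding task itself can be solved by brute force over a toy field (Sudan's contribution is achieving it in polynomial time); the field size, degree bound, agreement parameter, and points below are made up, with two of the six points corrupted.

    ```python
    import itertools

    q, d, t = 7, 1, 4   # toy field F_7, degree bound d, agreement threshold t
    points = [(0, 1), (1, 3), (2, 5), (3, 6), (4, 2), (5, 1)]

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (ascending coefficients) over F_q."""
        r = 0
        for c in reversed(coeffs):
            r = (r * x + c) % q
        return r

    hits = [c for c in itertools.product(range(q), repeat=d + 1)
            if sum(poly_eval(c, x) == y for x, y in points) >= t]
    print(hits)  # [(1, 2)], i.e. f(x) = 1 + 2x, agreeing in 4 of 6 places
    ```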

  12. Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables

    NASA Astrophysics Data System (ADS)

    Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan

    2007-06-01

    Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing techniques have proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the mapping accuracy of grass communities in wetlands is still unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong relationship with the distribution of grass communities; NDVI stacked from images of different months was used to generate the Carex community map, and the image from October was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree, and an artificial neural network separately. The results show that environmental variables can improve the mapping accuracy, and that classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). Moreover, maximum likelihood (accuracy=92.71%, kappa=0.90) and the artificial neural network (accuracy=94.79%, kappa=0.93) performed significantly better than the decision tree (accuracy=86.46%, kappa=0.83).

  13. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    PubMed

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
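
    A simplified stand-in for such occupancy estimation, assuming the one-parameter model BP_post = (1 - r) * BP_base and an iteratively reweighted least-squares fit, is sketched below; the paper's maximum likelihood estimator additionally models the covariance between the regional measures, which this sketch ignores. All numbers are illustrative.

    ```python
    import numpy as np

    def occupancy_irls(bp_base, bp_post, var_base, var_post, n_iter=20):
        """Estimate occupancy r from regional binding potentials under
        BP_post = (1 - r) * BP_base, weighting regions by the variance
        of the residual BP_post - (1 - r) * BP_base."""
        r = 0.5
        for _ in range(n_iter):
            w = 1.0 / (var_post + (1 - r) ** 2 * var_base)
            r = 1.0 - np.sum(w * bp_base * bp_post) / np.sum(w * bp_base ** 2)
        return r

    bp_base = np.array([2.1, 1.4, 0.9, 3.0])                 # baseline scan
    bp_post = 0.4 * bp_base + np.array([0.02, -0.03, 0.01, -0.01])
    var_base, var_post = 0.02 * bp_base, 0.02 * bp_post      # per-region variances
    print(occupancy_irls(bp_base, bp_post, var_base, var_post))  # near 0.6
    ```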

  14. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
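
    The baseline against which the bias reduction is measured is a plain maximum likelihood fit of a cumulative Gaussian psychometric function; a minimal sketch of that baseline (with illustrative data and starting values, and none of the paper's bias correction) follows.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_psychometric(stim, resp):
        """ML fit of P(resp = 1 | s) = Phi((s - mu) / sigma)."""
        def nll(theta):
            mu, log_sigma = theta
            p = norm.cdf((stim - mu) / np.exp(log_sigma))
            p = np.clip(p, 1e-9, 1 - 1e-9)  # guard the log
            return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))
        mu, log_sigma = minimize(nll, x0=(np.median(stim), 0.0)).x
        return mu, np.exp(log_sigma)

    rng = np.random.default_rng(5)
    s = rng.uniform(-3, 3, 400)
    y = (rng.uniform(size=400) < norm.cdf((s - 0.5) / 1.2)).astype(float)
    print(fit_psychometric(s, y))  # near the true (mu, sigma) = (0.5, 1.2)
    ```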

  15. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.

  16. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.

  17. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
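
    In the spirit of LOADEST's rating-curve regression, a minimal sketch of fitting ln(load) on ln(streamflow) plus annual seasonal terms is given below; the model form and data are illustrative, and LOADEST's AMLE handling of censored data and retransformation bias is omitted.

    ```python
    import numpy as np

    def design(q, t_dec):
        """Regression matrix: intercept, ln(flow), annual sine and cosine."""
        return np.column_stack([np.ones_like(q), np.log(q),
                                np.sin(2 * np.pi * t_dec),
                                np.cos(2 * np.pi * t_dec)])

    def fit_load_model(q, t_dec, load):
        beta, *_ = np.linalg.lstsq(design(q, t_dec), np.log(load), rcond=None)
        return beta  # OLS = MLE under normal errors

    def predict_load(beta, q, t_dec):
        return np.exp(design(q, t_dec) @ beta)  # no bias correction here

    rng = np.random.default_rng(6)
    t_dec = rng.uniform(0, 1, 200)                 # decimal time of year
    q = np.exp(rng.normal(2.0, 0.6, 200))          # synthetic streamflow
    load = np.exp(0.5 + 1.2 * np.log(q) + 0.3 * np.sin(2 * np.pi * t_dec)
                  + rng.normal(0, 0.2, 200))
    print(fit_load_model(q, t_dec, load))          # roughly (0.5, 1.2, 0.3, 0.0)
    ```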

  18. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837

  19. Fitting of dynamic recurrent neural network models to sensory stimulus-response data.

    PubMed

    Doruk, R Ozgur; Zhang, Kechen

    2018-06-02

    We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered as a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that have no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of the recurrent dynamical neuron network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having fixed amplitude and frequency but a randomly assigned phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, the results are compared with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study based on different models.
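
    The estimation principle can be sketched compactly: for an inhomogeneous Poisson spiking model with rate r(t), the log-likelihood of spike times t_k is sum_k log r(t_k) minus the integral of r(t) over the observation window. The Python toy below evaluates this on invented spike times, with a simple sigmoidal rate model standing in for the recurrent network; all names and numbers are assumptions for illustration:

      import numpy as np

      def poisson_loglik(spike_times, rate_fn, t_end, n_grid=2000):
          # log L = sum_k log r(t_k) - integral_0^T r(t) dt  (inhomog. Poisson)
          t = np.linspace(0.0, t_end, n_grid)
          integral = np.sum(rate_fn(t)) * (t[1] - t[0])
          return np.sum(np.log(rate_fn(np.asarray(spike_times)))) - integral

      # Illustrative rate model: a sigmoidal readout of a cosine stimulus, with
      # (gain, phase) standing in for the network parameters being fitted.
      def make_rate(gain, phase):
          return lambda t: 5.0 + gain / (1.0 + np.exp(-np.cos(2 * np.pi * t + phase)))

      spikes = [0.1, 0.35, 0.4, 0.9, 1.2, 1.7]        # toy spike timings (s)
      grid = [(g, p) for g in (5, 10, 20) for p in (0.0, 1.0)]
      best = max(grid, key=lambda gp: poisson_loglik(spikes, make_rate(*gp), 2.0))
      print("ML (gain, phase) on the toy grid:", best)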

  20. Visualizing Big Data Outliers through Distributed Aggregation.

    PubMed

    Wilkinson, Leland

    2017-08-29

    Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
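
    Below is a much-simplified sketch of the core idea (not the published hdoutliers implementation, which adds Leader clustering for big n, normalization, and mixed-type handling): score each row by its nearest-neighbor distance, fit an exponential to the upper tail of the scores, and flag rows whose isolation is improbable under that model:

      import numpy as np

      def flag_outliers(X, alpha=0.05):
          # Score rows by nearest-neighbor distance, fit an exponential to the
          # upper tail of the scores, and flag improbably isolated rows.
          X = np.asarray(X, dtype=float)
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
          np.fill_diagonal(d2, np.inf)
          nn = np.sqrt(d2.min(axis=1))
          tail = nn[nn >= np.quantile(nn, 0.5)]    # upper half of the distances
          lam = 1.0 / tail.mean()                  # exponential MLE for the tail
          cutoff = -np.log(alpha / len(nn)) / lam  # Bonferroni-style threshold
          return nn > cutoff

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(size=(100, 3)), [[12.0, 12.0, 12.0]]])
      print(np.where(flag_outliers(X))[0])         # flags the appended row (100)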

  1. Double-row vs single-row rotator cuff repair: a review of the biomechanical evidence.

    PubMed

    Wall, Lindley B; Keener, Jay D; Brophy, Robert H

    2009-01-01

    A review of the current literature will show a difference between the biomechanical properties of double-row and single-row rotator cuff repairs. Rotator cuff tears commonly necessitate surgical repair; however, the optimal technique for repair continues to be investigated. Recently, double-row repairs have been considered an alternative to single-row repair, allowing a greater coverage area for healing and a possibly stronger repair. We reviewed the literature of all biomechanical studies comparing double-row vs single-row repair techniques. Inclusion criteria included studies using cadaveric, animal, or human models that directly compared double-row vs single-row repair techniques, written in the English language, and published in peer reviewed journals. Identified articles were reviewed to provide a comprehensive conclusion of the biomechanical strength and integrity of the repair techniques. Fifteen studies were identified and reviewed. Nine studies showed a statistically significant advantage to a double-row repair with regards to biomechanical strength, failure, and gap formation. Three studies produced results that did not show any statistical advantage. Five studies that directly compared footprint reconstruction all demonstrated that the double-row repair was superior to a single-row repair in restoring anatomy. The current literature reveals that the biomechanical properties of a double-row rotator cuff repair are superior to a single-row repair.

  2. Multiple-Hit Parameter Estimation in Monolithic Detectors

    PubMed Central

    Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.

    2014-01-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231

  3. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  4. Glutamate receptor-channel gating. Maximum likelihood analysis of gigaohm seal recordings from locust muscle.

    PubMed Central

    Bates, S E; Sansom, M S; Ball, F G; Ramsey, R L; Usherwood, P N

    1990-01-01

    Gigaohm recordings have been made from glutamate receptor channels in excised, outside-out patches of collagenase-treated locust muscle membrane. The channels in the excised patches exhibit the kinetic state switching first seen in megaohm recordings from intact muscle fibers. Analysis of channel dwell time distributions reveals that the gating mechanism contains at least four open states and at least four closed states. Dwell time autocorrelation function analysis shows that there are at least three gateways linking the open states of the channel with the closed states. A maximum likelihood procedure has been used to fit six different gating models to the single channel data. Of these models, a cooperative model yields the best fit, and accurately predicts most features of the observed channel gating kinetics. PMID:1696510

  5. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  6. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  7. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  8. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization, which typically requires several iterations. A starting technique is used which ensures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
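
    A toy analogue of the method in Python, assuming Gaussian noise so the maximum likelihood estimate minimizes the squared output error; the scalar discrete-time model and the Gauss-Newton (modified Newton-Raphson) iteration below are illustrative, not the FORTRAN program itself:

      import numpy as np

      rng = np.random.default_rng(3)
      u = rng.normal(size=200)

      def simulate(theta):
          # Scalar linear model x[k+1] = a*x[k] + b*u[k]; output y[k] = x[k+1].
          a, b = theta
          x = np.zeros(u.size + 1)
          for k in range(u.size):
              x[k + 1] = a * x[k] + b * u[k]
          return x[1:]

      y = simulate((0.8, 0.5)) + 0.05 * rng.normal(size=u.size)

      theta = np.array([0.2, 0.2])                # deliberately poor start
      for _ in range(10):
          r = y - simulate(theta)
          # Central-difference sensitivities stand in for analytic gradients.
          J = np.column_stack([(simulate(theta + d) - simulate(theta - d)) / 2e-6
                               for d in 1e-6 * np.eye(2)])
          theta = theta + np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
          theta[0] = np.clip(theta[0], -0.99, 0.99)          # keep model stable

      print(np.round(theta, 3))                   # close to the true (0.8, 0.5)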

  9. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213

  10. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.

  11. Estimation of longitudinal stability and control derivatives for an icing research aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Batterson, James G.; Omara, Thomas M.

    1989-01-01

    The results of applying a modified stepwise regression algorithm and a maximum likelihood algorithm to flight data from a twin-engine commuter-class icing research aircraft are presented. The results are in the form of body-axis stability and control derivatives related to the short-period, longitudinal motion of the aircraft. Data were analyzed for the baseline (uniced) and for the airplane with an artificial glaze ice shape attached to the leading edge of the horizontal tail. The results are discussed as to the accuracy of the derivative estimates and the difference between the derivative values found for the baseline and the iced airplane. Additional comparisons were made between the maximum likelihood results and the modified stepwise regression results with causes for any discrepancies postulated.

  12. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
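
    The paper's distinction between conditional and marginal bias is easy to reproduce by simulation. In the Python sketch below (a hypothetical three-look rule that stops once the running mean exceeds the true mean; all numbers invented), the sample average sits close to the truth marginally yet looks clearly biased conditional on each realized sample size:

      import numpy as np

      rng = np.random.default_rng(4)
      mu, looks = 1.0, (20, 40, 80)      # stop early when the running mean > mu
      means, ns = [], []
      for _ in range(20000):
          x = rng.normal(mu, 1.0, looks[-1])
          if x[:20].mean() > mu:
              n = 20
          elif x[:40].mean() > mu:
              n = 40
          else:
              n = 80
          means.append(x[:n].mean())
          ns.append(n)
      means, ns = np.array(means), np.array(ns)

      print("marginal mean:", means.mean().round(3))  # near mu = 1.0
      for n in looks:                                 # visibly off, per look
          print(f"mean given N={n}:", means[ns == n].mean().round(3))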

  13. Formation flying benefits based on vortex lattice calculations

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1977-01-01

    A quadrilateral vortex-lattice method was applied to a formation of three wings to calculate force and moment data for use in estimating potential benefits of flying aircraft in formation on extended range missions, and of anticipating the control problems which may exist. The investigation led to two types of formation having virtually the same overall benefits for the formation as a whole, i.e., a V or echelon formation and a double row formation (with two staggered rows of aircraft). These formations have unequal savings on aircraft within the formation, but this allows large longitudinal spacings between aircraft which is preferable to the small spacing required in formations having equal benefits for all aircraft. A reasonable trade-off between a practical formation size and range benefit seems to lie at about three to five aircraft with corresponding maximum potential range increases of about 46 percent to 67 percent. At this time it is not known what fraction of this potential range increase is achievable in practice.

  14. Biomechanical comparison of single-row arthroscopic rotator cuff repair technique versus transosseous repair technique.

    PubMed

    Tocci, Stephen L; Tashjian, Robert Z; Leventhal, Evan; Spenciner, David B; Green, Andrew; Fleming, Braden C

    2008-01-01

    This study determined the effect of tear size on gap formation of single-row simple-suture arthroscopic rotator cuff repair (ARCR) vs transosseous Mason-Allen suture open RCR (ORCR) in 13 pairs of human cadaveric shoulders. A massive tear was created in 6 pairs and a large tear in 7. Repairs were cyclically tested in low-load and high-load conditions, with no significant difference in gap formation. Under low-load, gapping was greater in massive tears. Under high-load, there was a trend toward increased gap with ARCR for large tears. All repairs of massive tears failed in high-load. Gapping was greater posteriorly in massive tears for both techniques. Gap formation of a modeled RCR depends upon the tear size. ARCR of larger tears may have higher failure rates than ORCR, and the posterior aspect appears to be the site of maximum gapping. Specific attention should be directed toward maximizing initial fixation of larger rotator cuff tears, especially at the posterior aspect.

  15. Spanish version of the Thought-Action Fusion Questionnaire and its application in eating disorders.

    PubMed

    Jáuregui-Lobera, I; Santed-Germán, Ma; Bolaños-Ríos, P; Garrido-Casals, O

    2013-01-01

    The aims of the study were to analyze the psychometric properties of the Spanish version of the Thought-Action Fusion Questionnaire (TAF-SP), as well as to determine its validity by evaluating the relationship of the TAF-SP to different instruments. Two groups were studied: one comprising 146 patients with eating disorders, and another a group of 200 students. Three factors were obtained: TAF-Moral, TAF-Likelihood-others, and TAF-Likelihood-oneself. The internal consistency of the TAF-SP was determined by means of Cronbach's α coefficient, with values ranging between 0.84-0.95. The correlations with other instruments reflected adequate validity. The three-factor structure was tested by means of a linear structural equation model, and the structure fit satisfactorily. Differences in TAF-SP scores between the diagnostic subgroups were also analyzed. The TAF-SP meets the psychometric requirements for measuring thought-action fusion and shows adequate internal consistency and validity.

  16. A Framework for Modeling Emerging Diseases to Inform Management

    PubMed Central

    Katz, Rachel A.; Richgels, Katherine L.D.; Walsh, Daniel P.; Grant, Evan H.C.

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge. PMID:27983501

  17. A Framework for Modeling Emerging Diseases to Inform Management.

    PubMed

    Russell, Robin E; Katz, Rachel A; Richgels, Katherine L D; Walsh, Daniel P; Grant, Evan H C

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  18. A framework for modeling emerging diseases to inform management

    USGS Publications Warehouse

    Russell, Robin E.; Katz, Rachel A.; Richgels, Katherine L. D.; Walsh, Daniel P.; Grant, Evan H. Campbell

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  19. Single-row versus double-row rotator cuff repair: techniques and outcomes.

    PubMed

    Dines, Joshua S; Bedi, Asheesh; ElAttrache, Neal S; Dines, David M

    2010-02-01

    Double-row rotator cuff repair techniques incorporate a medial and lateral row of suture anchors in the repair configuration. Biomechanical studies of double-row repair have shown increased load to failure, improved contact areas and pressures, and decreased gap formation at the healing enthesis, findings that have provided impetus for clinical studies comparing single-row with double-row repair. Clinical studies, however, have not yet demonstrated a substantial improvement over single-row repair with regard to either the degree of structural healing or functional outcomes. Although double-row repair may provide an improved mechanical environment for the healing enthesis, several confounding variables have complicated attempts to establish a definitive relationship with improved rates of healing. Appropriately powered rigorous level I studies that directly compare single-row with double-row techniques in matched tear patterns are necessary to further address these questions. These studies are needed to justify the potentially increased implant costs and surgical times associated with double-row rotator cuff repair.

  20. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')^(-1) = Q^(-1) - Q^(-1) a a' Q^(-1) / (1 + a' Q^(-1) a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [Recoverable section headings: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and...
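
    The reconstructed identity is easy to verify numerically; in the Python check below, Q is an arbitrary positive-definite matrix standing in for the MA(1) covariance (an assumption for illustration):

      import numpy as np

      # Check the reconstructed rank-one Woodbury identity:
      # (Q + a a')^{-1} = Q^{-1} - Q^{-1} a a' Q^{-1} / (1 + a' Q^{-1} a)
      rng = np.random.default_rng(5)
      Q = np.diag(rng.uniform(1.0, 2.0, 5))   # any positive-definite Q will do
      a = rng.normal(size=(5, 1))

      Qinv = np.linalg.inv(Q)
      lhs = np.linalg.inv(Q + a @ a.T)
      rhs = Qinv - (Qinv @ a @ a.T @ Qinv) / (1.0 + a.T @ Qinv @ a)
      print(np.allclose(lhs, rhs))            # True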

  1. Living with COPD: Nutrition

    MedlinePlus


  2. Nutrition for Lung Cancer

    MedlinePlus


  3. Women and Tobacco Use

    MedlinePlus


  4. Applications of non-standard maximum likelihood techniques in energy and resource economics

    NASA Astrophysics Data System (ADS)

    Moeltner, Klaus

    Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment eliminates the unrealistic constraint of constant income across zonal residents, and thus reduces the risk of aggregation bias in estimated macro-parameters. The corrected aggregate specification reinstates the applicability of PML. It also increases model efficiency, and allows for the generation of welfare estimates for population subgroups.
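
    A minimal sketch of the simulated-likelihood ingredient mentioned above: averaging a conditional density over quasi-random Halton draws of a random coefficient. The one-dimensional model and all parameter values are invented and far simpler than the GHK-based estimator used in the thesis:

      import numpy as np
      from scipy.stats import norm

      def halton(n, base=2):
          # First n points of the one-dimensional Halton (van der Corput) sequence.
          seq = np.zeros(n)
          for i in range(1, n + 1):
              f, k, x = 1.0, i, 0.0
              while k > 0:
                  f /= base
                  x += f * (k % base)
                  k //= base
              seq[i - 1] = x
          return seq

      # Simulated likelihood for one observation y ~ N(beta_i, 1), beta_i ~ N(mu, s):
      # integrate the conditional density over the random coefficient using Halton
      # draws instead of pseudo-random draws (fewer draws for the same accuracy).
      def sim_lik(y, mu, s, n_draws=100):
          beta_draws = mu + s * norm.ppf(halton(n_draws, base=3))
          return norm.pdf(y - beta_draws).mean()

      print(sim_lik(y=0.5, mu=0.0, s=1.0))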

  5. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
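
    The stochastic-sampling approach can be sketched with a toy limit state in which failure means a sampled load exceeds a sampled capacity; the Latin-hypercube construction below is standard, while the load and capacity ranges are invented for illustration:

      import numpy as np

      def latin_hypercube(n_samples, n_dims, rng):
          # Stratified samples on [0, 1)^d: one point per stratum per dimension.
          u = (rng.random((n_samples, n_dims))
               + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_dims):
              rng.shuffle(u[:, j])          # decouple strata across dimensions
          return u

      # Toy failure assessment: the event occurs when load exceeds capacity.
      rng = np.random.default_rng(6)
      u = latin_hypercube(1000, 2, rng)
      load = 80 + 20 * u[:, 0]              # load uniform on [80, 100]
      capacity = 85 + 30 * u[:, 1]          # capacity uniform on [85, 115]
      print("estimated failure probability:", (load > capacity).mean())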

  6. Use pattern and predictors of use of highly caffeinated energy drinks among South Korean adolescents: a study using the Health Belief Model

    PubMed Central

    Ha, Dongmun; Song, Inmyung; Jang, Gyeongil; Lee, Eui-Kyung; Shin, Ju-Young

    2017-01-01

    Objectives: Concerns about the use of highly caffeinated energy drinks among Korean adolescents remain. We compared adolescents' perceptions regarding the use of the drinks to their behaviours and factors. Design: A structured questionnaire based on the Health Belief Model was administered to 850 freshmen and sophomores at three high schools in Bucheon, South Korea. Benefits were defined as beneficial effects from the use of highly caffeinated energy drinks (eg, awakening from sleepiness) and harms as adverse effects of the drinks (eg, cardiac palpitation). Likelihood of action represents the likelihood of taking actions that are perceived to be more beneficial after comparison of the benefits and harms of caffeine use. Descriptive analysis was used to quantify the relationship between their beliefs about highly caffeinated energy drinks and their use. We conducted hierarchical logistic regression to compute ORs and 95% CIs for: (1) demographic factors, (2) health threat, (3) likelihood of action and (4) cues to act. Results: Altogether, 833 students responded to the questionnaire (effective response rate = 98.0%). About 63.0% reported use of highly caffeinated energy drinks and 35.2% had used them as needed and habitually. The more susceptible the respondents perceived themselves to be to the risk of using these drinks, the less likely they were to use them (OR: 0.73, 95% CI 0.50 to 1.06). The more severe the perception of a health threat, the less that perception was associated with use (OR: 0.44, 95% CI 0.29 to 0.67). Likelihood of action was the strongest predictor of use, explaining 12.5% of the variance in use. Benefits and harms (OR: 4.43, 95% CI 2.77 to 7.09; OR: 1.86, 95% CI 1.16 to 2.99) also were significant predictors. Conclusions: Enhancing adolescents' perceptions of the benefits and harms of using highly caffeinated energy drinks could be an effective way to influence the use of these drinks. PMID:28947455

  7. The kinetics of rugby union scrummaging.

    PubMed

    Milburn, P D

    1990-01-01

    Two rugby union forward packs of differing ability levels were examined during scrummaging against an instrumented scrum machine. By systematically moving the front-row of the scrum along the scrum machine, kinetic data on each front-row forward could be obtained under all test conditions. Each forward pack was tested under the following scrummaging combinations: front-row only; front-row plus second-row; full scrum minus side-row, and full scrum. Data obtained from each scrum included the three orthogonal components of force at engagement and the sustained force applied by each front-row player. An estimate of sub-unit contributions was made by subtracting the total forward force on all three front-row players from the total for the complete scrum. Results indicated the primary role of the second-row appeared to be application of forward force. The back-row ('number eight') forward did not substantially contribute any additional forward force, and added only slightly to the lateral and vertical shear force experienced by the front-row. The side-row contributed an additional 20-27% to the forward force, but at the expense of increased vertical forces on all front-row forwards. Results of this investigation are discussed in relation to rule modification, rule interpretation and coaching.

  8. Disparities in the Impact of Air Pollution

    MedlinePlus


  9. Tobacco Use in Racial and Ethnic Populations

    MedlinePlus


  10. Bayes multiple decision functions.

    PubMed

    Wu, Wensong; Peña, Edsel A

    2013-01-01

    This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. Such problems arise in many practical areas such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technology and with the goal being to decide which of many genes are relevant with respect to some phenotype of interest; in the engineering and reliability sciences; in astronomy; in education; and in business. A Bayesian decision-theoretic approach to this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the quality of decision. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow a dependent data structure with the associations obtained through a class of frailty-induced Archimedean copulas. In particular, non-Gaussian dependent data structure, which is typical with failure-time data, can be entertained. The numerical implementation of the determination of the Bayes optimal action is facilitated through sequential Monte Carlo techniques. The theory developed could also be extended to the problem of multiple hypotheses testing, multiple classification and prediction, and high-dimensional variable selection. The proposed procedure is illustrated for the simple versus simple hypotheses setting and for the composite hypotheses setting through simulation studies. The procedure is also applied to a subset of a microarray data set from a colon cancer study.

  11. Interpersonal Coordination and Individual Organization Combined with Shared Phenomenological Experience in Rowing Performance: Two Case Studies

    PubMed Central

    Seifert, Ludovic; Lardy, Julien; Bourbousson, Jérôme; Adé, David; Nordez, Antoine; Thouvarecq, Régis; Saury, Jacques

    2017-01-01

    The principal aim of this study was to examine the impact of variability in interpersonal coordination and individual organization on rowing performance. The second aim was to analyze crew phenomenology in order to understand how rowers experience their joint actions when coping with constraints emerging from the race. We conducted a descriptive and exploratory study of two coxless pair crews during a 3000-m rowing race against the clock. As the investigation was performed in an ecological context, we postulated that our understanding of the behavioral dynamics of interpersonal coordination and individual organization and the variability in performance would be enriched through the analysis of crew phenomenology. The behavioral dynamics of individual organization were assessed at kinematic and kinetic levels, and interpersonal coordination was examined by computing the relative phase between oar angles and oar forces and the difference in the oar force impulse of the two rowers. The inter-cycle variability of the behavioral dynamics of one international and one national crew was evaluated by computing the root mean square and the Cauchy index. Inter-cycle variability was considered significantly high when the behavioral and performance data for each cycle were outside of the confidence interval. Crew phenomenology was characterized on the basis of self-confrontation interviews and the rowers' concerns were then analyzed according to course-of-action methodology to identify the shared experiences. Our findings showed that greater behavioral variability could be either “perturbing” or “functional” depending on its impact on performance (boat velocity); the rowers experienced it as sometimes meaningful and sometimes meaningless; and their experiences were similar or diverging. By combining phenomenological and behavioral data, we explain how constraints not manipulated by an experimenter but emerging from the ecological context of a race can be associated with functional adaptations or perturbations of the interpersonal coordination. PMID:28194127

  12. Associations between poor sleep quality and stages of change of multiple health behaviors among participants of employee wellness program

    PubMed Central

    Hui, Siu-kuen Azor; Grandner, Michael A.

    2015-01-01

    Objective Using the Transtheoretical Model of behavioral change, this study evaluates the relationship between sleep quality and the motivation and maintenance processes of healthy behavior change. Methods The current study is an analysis of data collected in 2008 from an online health risk assessment (HRA) survey completed by participants of the Kansas State employee wellness program (N = 13,322). Using multinomial logistic regression, associations between self-reported sleep quality and stages of change (i.e. precontemplation, contemplation, preparation, action, maintenance) in five health behaviors (stress management, weight management, physical activities, alcohol use, and smoking) were analyzed. Results Adjusted for covariates, poor sleep quality was associated with an increased likelihood of contemplation, preparation, and in some cases action stage when engaging in the health behavior change process, but generally a lower likelihood of maintenance of the healthy behavior. Conclusions The present study demonstrated that poor sleep quality was associated with an elevated likelihood of contemplating or initiating behavior change, but a decreased likelihood of maintaining healthy behavior change. It is important to include sleep improvement as one of the lifestyle management interventions offered in EWP to comprehensively reduce health risks and promote the health of a large employee population. PMID:26046013

  13. Interim Scientific Report: AFOSR-81-0122.

    DTIC Science & Technology

    1983-05-05

    Maximum likelihood. 2 Periton Lane, Minehead, TA24 8AQ, England.

  14. Does double-row rotator cuff repair improve functional outcome of patients compared with single-row technique? A systematic review.

    PubMed

    DeHaan, Alexander M; Axelrad, Thomas W; Kaye, Elizabeth; Silvestri, Lorenzo; Puskas, Brian; Foster, Timothy E

    2012-05-01

    The advantage of single-row versus double-row arthroscopic rotator cuff repair techniques has been a controversial issue in sports medicine and shoulder surgery. There is biomechanical evidence that double-row techniques are superior to single-row techniques; however, there is no clinical evidence that the double-row technique provides an improved functional outcome. When compared with single-row rotator cuff repair, double-row fixation, although biomechanically superior, has no clinical benefit with respect to retear rate or improved functional outcome. Study design: systematic review. The authors reviewed prospective studies of level I or II clinical evidence that compared the efficacy of single- and double-row rotator cuff repairs. Functional outcome scores included the American Shoulder and Elbow Surgeons (ASES) shoulder scale, the Constant shoulder score, and the University of California, Los Angeles (UCLA) shoulder rating scale. Radiographic failures and complications were also analyzed. A test of heterogeneity for patient demographics was also performed to determine if there were differences in the patient profiles across the included studies. Seven studies fulfilled our inclusion criteria. The test of heterogeneity across these studies showed no differences. The functional ASES, Constant, and UCLA outcome scores revealed no difference between single- and double-row rotator cuff repairs. The total retear rate, which included both complete and partial retears, was 43.1% for the single-row repair and 27.2% for the double-row repair (P = .057), representing a trend toward higher failures in the single-row group. Through a comprehensive literature search and meta-analysis of current arthroscopic rotator cuff repairs, we found that the single-row repairs did not differ from the double-row repairs in functional outcome scores. The double-row repairs revealed a trend toward a lower radiographic proven retear rate, although the data did not reach statistical significance. There may be a concerning trend toward higher retear rates in patients undergoing a single-row repair, but further studies are required.

  15. Optimizing Unmanned Aircraft System Scheduling

    DTIC Science & Technology

    2008-06-01


  16. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
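
    A compact Python rendering of the idea, using what is believed to be Knuth's closed-form relative log-posterior for M equal-width bins; treat the exact expression as an assumption and consult the optBINS source for the authoritative form:

      import numpy as np
      from scipy.special import gammaln

      def log_posterior_bins(data, m):
          # Relative log-posterior for m equal-width bins (up to a constant):
          # n*log(m) + lnG(m/2) - m*lnG(1/2) - lnG(n + m/2) + sum_k lnG(n_k + 1/2)
          n = len(data)
          counts, _ = np.histogram(data, bins=m)
          return (n * np.log(m) + gammaln(m / 2.0) - m * gammaln(0.5)
                  - gammaln(n + m / 2.0) + gammaln(counts + 0.5).sum())

      rng = np.random.default_rng(7)
      data = rng.normal(size=1000)
      m_grid = np.arange(1, 101)
      best_m = m_grid[np.argmax([log_posterior_bins(data, m) for m in m_grid])]
      print("optimal number of bins:", best_m)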

  17. Integrated Efforts for Analysis of Geophysical Measurements and Models.

    DTIC Science & Technology

    1997-09-26

    This contract supported investigations of integrated applications of physics, ephemerides... [Recoverable section headings: ...Regions and GPS Data Validations; PL-SCINDA: Visualization and Analysis Techniques; View Controls; Map Selection...] ...and IR data, about cloudy pixels. Clustering and maximum likelihood classification algorithms categorize up to four cloud layers into stratiform or...

  18. Double-Row Capsulolabral Repair Increases Load to Failure and Decreases Excessive Motion.

    PubMed

    McDonald, Lucas S; Thompson, Matthew; Altchek, David W; McGarry, Michelle H; Lee, Thay Q; Rocchi, Vanna J; Dines, Joshua S

    2016-11-01

    Using a cadaver shoulder instability model and load-testing device, we compared biomechanical characteristics of double-row and single-row capsulolabral repairs. We hypothesized a greater reduction in glenohumeral motion and translation and a higher load to failure in a mattress double-row capsulolabral repair than in a single-row repair. In 6 matched pairs of cadaveric shoulders, a capsulolabral injury was created. One shoulder was repaired with a single-row technique, and the other with a double-row mattress technique. Rotational range of motion, anterior-inferior translation, and humeral head kinematics were measured. Load-to-failure testing measured stiffness, yield load, deformation at yield load, energy absorbed at yield load, load to failure, deformation at ultimate load, and energy absorbed at ultimate load. Double-row repair significantly decreased external rotation and total range of motion compared with single-row repair. Both repairs decreased anterior-inferior translation compared with the capsulolabral-injured condition, however, no differences existed between repair types. Yield load in the single-row group was 171.3 ± 110.1 N, and in the double-row group it was 216.1 ± 83.1 N (P = .02). Ultimate load to failure in the single-row group was 224.5 ± 121.0 N, and in the double-row group it was 373.9 ± 172.0 N (P = .05). Energy absorbed at ultimate load in the single-row group was 1,745.4 ± 1,462.9 N-mm, and in the double-row group it was 4,649.8 ± 1,930.8 N-mm (P = .02). In cases of capsulolabral disruption, double-row repair techniques may result in decreased shoulder rotational range of motion and improved load-to-failure characteristics. In cases of capsulolabral disruption, repair techniques with double-row mattress repair may provide more secure fixation. Double-row capsulolabral repair decreases shoulder motion and increases load to failure, yield load, and energy absorbed at yield load more than single-row repair. Published by Elsevier Inc.

  19. Single-row versus double-row capsulolabral repair: a comparative evaluation of contact pressure and surface area in the capsulolabral complex-glenoid bone interface.

    PubMed

    Kim, Doo-Sup; Yoon, Yeo-Seung; Chung, Hoi-Jeong

    2011-07-01

    Despite the attention that has been paid to restoration of the capsulolabral complex anatomic insertion onto the glenoid, studies comparing the pressurized contact area and mean interface pressure at the anatomic insertion site between a single-row repair and a double-row labral repair have been uncommon. The purpose of our study was to compare the mean interface pressure and pressurized contact area at the anatomic insertion site of the capsulolabral complex between a single-row repair and a double-row repair technique. Controlled laboratory study. Thirty fresh-frozen cadaveric shoulders (mean age, 61 ± 8 years; range, 48-71 years) were used for this study. Two types of repair were performed on each specimen: (1) a single-row repair and (2) a double-row repair. Using pressure-sensitive films, we examined the interface contact area and contact pressure. The mean interface pressure was greater for the double-row repair technique (0.29 ± 0.04 MPa) when compared with the single-row repair technique (0.21 ± 0.03 MPa) (P = .003). The mean pressurized contact area was also significantly greater for the double-row repair technique (211.8 ± 18.6 mm², 78.4% footprint) compared with the single-row repair technique (106.4 ± 16.8 mm², 39.4% footprint) (P = .001). The double-row repair has significantly greater mean interface pressure and pressurized contact area at the insertion site of the capsulolabral complex than the single-row repair. The double-row repair may be advantageous compared with the single-row repair in restoring the native footprint area of the capsulolabral complex.

  20. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  1. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  2. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
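
    A small sketch of the hidden-variable extraction on a synthetic graph (not the World Trade Web data), assuming the standard connection probability p_ij = x_i x_j / (1 + x_i x_j); the ML condition then equates expected and observed degrees, which the fixed-point loop below solves:

      import numpy as np

      rng = np.random.default_rng(8)
      n = 30
      A = (rng.random((n, n)) < 0.15).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                 # symmetric, no self-loops
      k = A.sum(axis=1)                           # observed degrees

      x = k / np.sqrt(k.sum() + 1.0)              # conventional starting point
      for _ in range(2000):                       # fixed-point iteration for ML
          denom = x[None, :] / (1.0 + np.outer(x, x))   # x_j / (1 + x_i x_j)
          np.fill_diagonal(denom, 0.0)
          x = k / np.maximum(denom.sum(axis=1), 1e-12)

      P = np.outer(x, x) / (1.0 + np.outer(x, x))
      np.fill_diagonal(P, 0.0)
      print(np.abs(P.sum(axis=1) - k).max())      # ~ 0: expected degrees match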

  3. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.

  4. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  5. Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.

    PubMed

    Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar

    2017-03-01

    This study describes a newly developed parallel algorithm for phylogenetic analysis of DNA sequences. D-Phylo is an advanced algorithm for phylogenetic analysis using the maximum likelihood approach. It exploits the search capacity of k-means while avoiding that method's main limitation of becoming stuck at locally conserved motifs. The authors tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (Hardware Virtual Machine) i2.4xlarge instance (six central processing units, 122 GiB memory, 8 × 800 solid-state drive Elastic Block Store volumes, high network performance) with up to 15 processors for several real-life datasets. Distributing the clusters evenly across the processors makes it possible to achieve near-linear speed-up when a large number of processors is available.

  6. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
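
    A hedged sketch of the correlation classifier at low light: treating photon arrival positions as samples from a class intensity pattern, correlating the photon record with the log-likelihood-ratio reference function log(p1/p2) implements the ML two-class decision. The intensity patterns below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 256
u = np.linspace(0.0, 1.0, npix)
I1 = 1.0 + 0.8 * np.sin(2 * np.pi * 3 * u)   # invented class-1 intensity
I2 = 1.0 + 0.8 * np.sin(2 * np.pi * 5 * u)   # invented class-2 intensity
p1, p2 = I1 / I1.sum(), I2 / I2.sum()
ref = np.log(p1 / p2)                        # ML reference function

def classify(n_photons, class_p):
    photons = rng.choice(npix, size=n_photons, p=class_p)  # photon positions
    return 1 if ref[photons].sum() > 0.0 else 2            # sign of log-LR

hits = sum(classify(200, p1) == 1 for _ in range(1000))
print("class-1 recognition rate:", hits / 1000)
```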

  7. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  8. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity appears among the predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach in handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
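
    A minimal sketch of the contrast the study investigates, on synthetic data with two nearly collinear predictors: a plain maximum likelihood logistic fit versus an L2 (ridge) penalised fit. sklearn's C is the inverse penalty strength; penalty=None assumes sklearn >= 1.2.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)      # severe multicollinearity
X = np.column_stack([x1, x2])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(x1 + x2)))).astype(int)

mle = LogisticRegression(penalty=None).fit(X, y)       # plain ML (sklearn >= 1.2)
ridge = LogisticRegression(penalty='l2', C=0.5).fit(X, y)
print("ML coefficients:   ", mle.coef_)                # unstable, inflated
print("ridge coefficients:", ridge.coef_)              # shrunken, more stable
```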

  9. A real-time signal combining system for Ka-band feed arrays using maximum-likelihood weight estimates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1990-01-01

    A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
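
    An illustrative sketch only, not the paper's estimator: one common way to obtain combining weights from observed samples over a sliding window is the principal eigenvector of the sample covariance, which approximates maximal-ratio combining when a single signal dominates. The seven-element array, gains, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
elements, samples = 7, 4000
gains = rng.standard_normal(elements) + 1j * rng.standard_normal(elements)
signal = rng.standard_normal(samples) + 1j * rng.standard_normal(samples)
noise = 0.5 * (rng.standard_normal((elements, samples))
               + 1j * rng.standard_normal((elements, samples)))
x = np.outer(gains, signal) + noise

window = x[:, :1000]                            # one sliding-window segment
R = window @ window.conj().T / window.shape[1]  # sample covariance
w = np.linalg.eigh(R)[1][:, -1]                 # dominant eigenvector as weights
combined = w.conj() @ x

def corr(a, b):                                 # magnitude correlation with signal
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

print("single element %.3f   combined %.3f" % (corr(x[0], signal), corr(combined, signal)))
```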

  10. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  11. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share return in Malaysia is studied. The monthly, quarterly, half yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, with L-moments estimates (LMOM) used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness of fit test is used to assess how well the monthly, quarterly, half yearly and yearly maxima converge to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test, however, indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness of fit test shows that the monthly, quarterly, half yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. The return level estimate, here the return amount that is expected to be exceeded on average once every t time periods, starts to appear in the confidence interval at T = 50 for the quarterly, half yearly and yearly maxima.
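
    A hedged sketch of the stationary building block of such an analysis: fitting the GEV to block maxima by maximum likelihood with scipy and reading off a return level. The maxima below are simulated, not Malaysian share returns.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
maxima = rng.standard_normal((600, 21)).max(axis=1)   # 600 simulated block maxima

shape, loc, scale = genextreme.fit(maxima)            # ML estimates
print("shape %.3f  location %.3f  scale %.3f" % (shape, loc, scale))

# Return level exceeded on average once every 50 blocks:
print("50-block return level: %.3f" % genextreme.ppf(1.0 - 1.0 / 50.0, shape, loc, scale))
```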

  12. Toward a New Diversity: Guidelines for a Staff Diversity/Affirmative Action Plan.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    These guidelines for California's community colleges specify required elements of a staff diversity/affirmative action plan, recommend sound practices and activities that will maximize the likelihood of success, and provide information on, and required elements of, related issues such as sexual harassment, handicap discrimination, and AIDS in the…

  13. Promoting Cancer Screening among Churchgoing Latinas: "Fe en Acción"/Faith in Action

    ERIC Educational Resources Information Center

    Elder, J. P.; Haughton, J.; Perez, L. G.; Martínez, M. E.; De la Torre, C. L.; Slymen, D. J.; Arredondo, E. M.

    2017-01-01

    Cancer screening rates among Latinas are generally low, reducing the likelihood of early cancer detection in this population. This article examines the effects of a community intervention ("Fe en Acción"/Faith in Action) led by community health workers ("promotoras") on promoting breast, cervical and colorectal cancer screening…

  14. 78 FR 40669 - Endangered and Threatened Wildlife and Plants; Endangered Species Status for Cape Sable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... Wildlife Service, Interior. ACTION: Proposed rule; reopening of comment period; availability of draft... final decisions on these actions. ADDRESSES: Document availability: You may obtain copies of the October... likelihood of adverse social reactions to the designation of critical habitat, as discussed in the DEA, and...

  15. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
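
    A generic sketch of a profile-likelihood CI outside the IRT setting: for a normal mean with the variance profiled out, the 95% interval collects the mean values whose likelihood-ratio statistic stays below the chi-square(1) cutoff of 3.84. Data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(1.0, 2.0, size=50)
n = x.size

def profile_loglik(mu):
    s2 = np.mean((x - mu) ** 2)       # nuisance variance maximised out
    return -0.5 * n * np.log(s2)      # up to an additive constant

grid = np.linspace(x.mean() - 3.0, x.mean() + 3.0, 2001)
lp = np.array([profile_loglik(m) for m in grid])
inside = 2.0 * (lp.max() - lp) <= 3.841   # chi-square(1), 95% quantile
print("95%% PL CI for the mean: [%.3f, %.3f]" % (grid[inside][0], grid[inside][-1]))
```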

  16. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood.
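
    EM machinery in miniature, for illustration only (the Copas-like selection model has a different latent structure): alternating E and M steps for a two-component Gaussian mixture with unit variances, where the component labels play the role of the latent variable.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])
pi, mu1, mu2 = 0.5, -1.0, 1.0                 # initial guesses

for _ in range(200):
    # E-step: responsibility of component 1 for each observation.
    d1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
    d2 = (1.0 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    r = d1 / (d1 + d2)
    # M-step: maximise the expected complete-data log likelihood.
    pi, mu1, mu2 = r.mean(), (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()

print("weight %.3f  means %.3f, %.3f" % (pi, mu1, mu2))
```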

  17. Cyclic loading of rotator cuff reconstructions: single-row repair with modified suture configurations versus double-row repair.

    PubMed

    Lorbach, Olaf; Bachelier, Felix; Vees, Jochen; Kohn, Dieter; Pape, Dietrich

    2008-08-01

    Double-row repair is suggested to have superior biomechanical properties in rotator cuff reconstruction compared with single-row repair. However, double-row rotator cuff repair is frequently compared with simple suture repair and not with modified suture configurations. Single-row rotator cuff repairs with modified suture configurations have similar failure loads and gap formations as double-row reconstructions. Controlled laboratory study. We created 1 x 2-cm defects in 48 porcine infraspinatus tendons. Reconstructions were then performed with 4 single-row repairs and 2 double-row repairs. The single-row repairs included transosseous simple sutures; double-loaded corkscrew anchors in either a double mattress or modified Mason-Allen suture repair; and the Magnum Knotless Fixation Implant with an inclined mattress. Double-row repairs were either with Bio-Corkscrew FT using modified Mason-Allen stitches or a combination of Bio-Corkscrew FT and PushLock anchors using the SutureBridge Technique. During cyclic load (10 N to 60-200 N), gap formation was measured, and finally, ultimate load to failure and type of failure were recorded. Double-row double-corkscrew anchor fixation had the highest ultimate tensile strength (398 +/- 98 N) compared to simple sutures (105 +/- 21 N; P < .0001), single-row corkscrews using a modified Mason-Allen stitch (256 +/- 73 N; P = .003) or double mattress repair (290 +/- 56 N; P = .043), the Magnum Implant (163 +/- 13 N; P < .0001), and double-row repair with PushLock and Bio-Corkscrew FT anchors (163 +/- 59 N; P < .0001). Single-row double mattress repair was superior to transosseous sutures (P < .0001), the Magnum Implant (P = .009), and double-row repair with PushLock and Bio-Corkscrew FT anchors (P = .009). Lowest gap formation was found for double-row double-corkscrew repair (3.1 +/- 0.1 mm) compared to simple sutures (8.7 +/- 0.2 mm; P < .0001), the Magnum Implant (6.2 +/- 2.2 mm; P = .002), double-row repair with PushLock and Bio-Corkscrew FT anchors (5.9 +/- 0.9 mm; P = .008), and corkscrews with modified Mason-Allen sutures (6.4 +/- 1.3 mm; P = .001). Double-row double-corkscrew anchor rotator cuff repair offered the highest failure load and smallest gap formation and provided the most secure fixation of all tested configurations. Double-loaded suture anchors using modified suture configurations achieved superior results in failure load and gap formation compared to simple suture repair and showed similar loads and gap formation with double-row repair using PushLock and Bio-Corkscrew FT anchors. Single-row repair with modified suture configurations may lead to results comparable to several double-row fixations. If double-row repair is used, modified stitches might further minimize gap formation and increase failure load.

  18. Gnathostoma infection in fish caught for local consumption in Nakhon Nayok Province, Thailand. II. Seasonal variation in swamp eels.

    PubMed

    Rojekittikhun, Wichit; Chaiyasith, Tossapon; Butraporn, Piyarat

    2004-12-01

    From August 2000 to August 2001, 1,844 swamp eels (Monopterus albus) were purchased from several local markets in Nakhon Nayok Province, Thailand, and examined for the presence of Gnathostoma advanced third-stage larvae. The overall prevalence was 30.1% and the mean number of larvae/eel (infection intensity) was 10.0. The highest infection rate (44.1%) was found in August 2000 and the lowest (10.7%) in March 2001. The greatest mean number of larvae/eel (75.1) was found in August 2000, whereas the fewest (2.3) was in July 2001. It is suggested that the prevalence and intensity of infection decreased within two months after the end of the rainy season and started to rise again about two months after the next rainy season began. A total of 5,532 Gnathostoma larvae were recovered from 555 infected eels, with a maximum number of 698 larvae/eel. The highest rates of Gnathostoma infection according to eel body length and weight were 87.5% in the group 91-100 cm, and 100% in groups of 901-1100 g, respectively. There were significant correlations between eel body lengths and infection rates, and between body lengths and infection intensities; eel body weights were also significantly correlated with infection rates and infection intensities. It was noted that the longer/heavier the eels were, the higher would be the infection rates and the greater the infection intensities. Tissue distributions of Gnathostoma larvae in the livers and muscles of swamp eels were as follows: 43.0% of the total number of larvae were found in the muscles and 57.0% were in the liver; 29.7, 51.7, and 18.6% were in the anterior, middle, and posterior parts, respectively; 35.1% were in the dorsal part, while 64.9% were in the ventral part; 9.0, 18.7, 7.4, 20.6, 33.1, and 11.2% were in the anterodorsal, mediodorsal, posterodorsal, anteroventral, medioventral and posteroventral parts, respectively. Of the 5,532 Gnathostoma larvae examined, 1,101 (19.9%) were found to possess morphological variants or abnormal cephalic hooklets. The most common unusual feature was the presence of few to numerous extra rudimentary hooklets below row 4 and between the 4 rows of hooklets (7.6%); the other variants were a fifth row of hooklets (3.5%), abnormal hooklets in any of the 4 rows (5.2%), spiral arrangement of the 4 rows (1.8%), and larvae having only 3 rows of hooklets (0.3%).

  19. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration Methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and an Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness of fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation for extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
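
    A hedged sketch of the partial-duration side of such an analysis: excesses over a threshold fitted to the Generalized Pareto distribution by maximum likelihood. The data are simulated, and the threshold here is a crude fixed quantile; the Adapted Hill estimator and declustering steps are omitted.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(8)
rain = rng.exponential(12.0, size=10_000)     # stand-in daily rainfall (mm)
u = np.quantile(rain, 0.95)                   # crude fixed threshold
excess = rain[rain > u] - u

c, loc, scale = genpareto.fit(excess, floc=0.0)   # ML fit, location pinned at 0
print("threshold %.1f mm  shape %.3f  scale %.3f" % (u, c, scale))
```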

  20. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley and Six-rowed Blue Malting barley. 810.204 Section 810.204 Agriculture Regulations of the... requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley. Grade Minimum limits of— Test... and Six-rowed Blue Malting barley varieties not meeting the requirements of this section shall be...

  1. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley and Six-rowed Blue Malting barley. 810.204 Section 810.204 Agriculture Regulations of the... requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley. Grade Minimum limits of— Test... and Six-rowed Blue Malting barley varieties not meeting the requirements of this section shall be...

  2. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... barley and Six-rowed Blue Malting barley. 810.204 Section 810.204 Agriculture Regulations of the... requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley. Grade Minimum limits of— Test... and Six-rowed Blue Malting barley varieties not meeting the requirements of this section shall be...

  3. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley and Six-rowed Blue Malting barley. 810.204 Section 810.204 Agriculture Regulations of the... requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley. Grade Minimum limits of— Test... and Six-rowed Blue Malting barley varieties not meeting the requirements of this section shall be...

  4. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    PubMed Central

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    Abstract The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  5. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  6. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
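
    A generic sketch of the nested likelihood-ratio comparison this method rests on (the semi-nonparametric expansion itself is more involved): an exponential null, which is a gamma distribution with shape 1, is tested against a gamma alternative on simulated data.

```python
import numpy as np
from scipy.stats import chi2, expon, gamma

rng = np.random.default_rng(9)
y = rng.gamma(shape=2.0, scale=1.5, size=500)     # data that violate the null

ll_null = expon.logpdf(y, scale=y.mean()).sum()   # exponential ML fit (scale = mean)
a, loc, sc = gamma.fit(y, floc=0.0)               # gamma ML fit, location pinned
ll_alt = gamma.logpdf(y, a, loc, sc).sum()

lr = 2.0 * (ll_alt - ll_null)                     # exponential = gamma with shape 1
print("LR statistic %.1f  p-value %.2g" % (lr, chi2.sf(lr, df=1)))
```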

  7. A biomechanical comparison of single and double-row fixation in arthroscopic rotator cuff repair.

    PubMed

    Smith, Christopher D; Alexander, Susan; Hill, Adam M; Huijsmans, Pol E; Bull, Anthony M J; Amis, Andrew A; De Beer, Joe F; Wallace, Andrew L

    2006-11-01

    The optimal method for arthroscopic rotator cuff repair is not yet known. The hypothesis of the present study was that a double-row repair would demonstrate superior static and cyclic mechanical behavior when compared with a single-row repair. The specific aims were to measure gap formation at the bone-tendon interface under static creep loading and the ultimate strength and mode of failure of both methods of repair under cyclic loading. A standardized tear of the supraspinatus tendon was created in sixteen fresh cadaveric shoulders. Arthroscopic rotator cuff repairs were performed with use of either a double-row technique (eight specimens) or a single-row technique (eight specimens) with nonabsorbable sutures that were double-loaded on a titanium suture anchor. The repairs were loaded statically for one hour, and the gap formation was measured. Cyclic loading to failure was then performed. Gap formation during static loading was significantly greater in the single-row group than in the double-row group (mean and standard deviation, 5.0 +/- 1.2 mm compared with 3.8 +/- 1.4 mm; p < 0.05). Under cyclic loading, the double-row repairs failed at a mean of 320 +/- 96.9 N whereas the single-row repairs failed at a mean of 224 +/- 147.9 N (p = 0.058). Three single-row repairs and three double-row repairs failed as a result of suture cut-through. Four single-row repairs and one double-row repair failed as a result of anchor or suture failure. The remaining five repairs did not fail, and a midsubstance tear of the tendon occurred. Although more technically demanding, the double-row technique demonstrates superior resistance to gap formation under static loading as compared with the single-row technique. A double-row reconstruction of the supraspinatus tendon insertion may provide a more reliable construct than a single-row repair and could be used as an alternative to open reconstruction for the treatment of isolated tears.

  8. Multicenter Evaluation Of Coronary Dual-Source CT angiography in patients with intermediate Risk of Coronary Artery Stenoses (MEDIC): study design and rationale.

    PubMed

    Marwan, Mohamed; Hausleiter, Jörg; Abbara, Suhny; Hoffmann, Udo; Becker, Christoph; Ovrehus, Kristian; Ropers, Dieter; Bathina, Ravi; Berman, Dan; Anders, Katharina; Uder, Michael; Meave, Aloha; Alexánderson, Erick; Achenbach, Stephan

    2014-01-01

    The diagnostic performance of multidetector row CT to detect coronary artery stenosis has been evaluated in numerous single-center studies, with only limited data from large cohorts with low-to-intermediate likelihood of coronary disease and in multicenter trials. The Multicenter Evaluation of Coronary Dual-Source CT Angiography in Patients with Intermediate Risk of Coronary Artery Stenoses (MEDIC) trial determines the accuracy of dual-source CT (DSCT) to identify persons with at least 1 coronary artery stenosis among patients with low-to-intermediate pretest likelihood of disease. The MEDIC trial was designed as a prospective, multicenter, international trial to evaluate the diagnostic performance of DSCT for the detection of coronary artery stenosis compared with invasive coronary angiography. The study includes 8 sites in Germany, India, Mexico, the United States, and Denmark. The study population comprises patients referred for a diagnostic coronary angiogram because of suspected coronary artery disease with an intermediate pretest likelihood as determined by sex, age, and symptoms. All evaluations are performed by blinded core laboratory readers. The primary outcome of the MEDIC trial is the accuracy of DSCT to identify the presence of coronary artery stenoses with a luminal diameter narrowing of 50% or more on a per-vessel basis. Secondary outcome parameters include per-patient and per-segment diagnostic accuracy for 50% stenoses and accuracy to identify stenoses of 70% or more. Furthermore, secondary outcome parameters include the influence of heart rate, Agatston score, body weight, body mass index, image quality, and diagnostic confidence on the accuracy to detect coronary artery stenoses >50% on a per-vessel basis. The results of the MEDIC trial will assess the clinical utility of coronary CT angiography in the evaluation of patients with intermediate pretest likelihood of coronary artery disease.

  9. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.

  10. Variations in hemostatic parameters after near-maximum exercise and specific tests in athletes.

    PubMed

    Cerneca, F; Crocetti, G; Gombacci, A; Simeone, R; Tamaro, G; Mangiarotti, M A

    1999-03-01

    The clotting state of the blood changes according to the type of physical exercise to which a group of healthy subjects are subjected. We studied the behaviour of the coagulation system before and after near-maximum, specific and standardized exercise tests in three groups of males practising sports defined as demanding in terms of cardiovascular output. The study was a comparative investigation between athletes and a group of controls composed of presumably healthy males. The athletes were training for competitions such as marathon, rowing and weightlifting. We tested 7 rowers using the rowing machine, 12 marathon runners using the treadmill, 7 weightlifters using their own exercise equipment, and 7 healthy subjects (controls) using the cycle ergometer. During the tests we monitored heart rate, maximal oxygen intake, anaerobic threshold, respiratory quotient, maximum ventilation, and lactic acid. The following coagulation tests were performed before and after near-maximum exercise: prothrombin time (PT), partial activated thromboplastin time (PTT), fibrinogen (FBG), antithrombin III (ATIII), protein C (PC), protein S (PS), prothrombin fragment 1 + 2 (F1 + 2), tissue activator of plasminogen (t-PA) and its inhibitor (PAI). The most significant results showed a low basal PC in the rowers, which decreased further after near-maximum exercise; significantly higher basal activities of ATIII, PC and PS in the marathon runners compared to the rowers; a reduction in t-PA after exercise and an increase in PAI in a high proportion of weightlifters; and the controls were the only group in which fibrinolytic activity and all the circulating anticoagulants increased after near-maximum exercise. Thus subjects who practise aerobic sports differ principally in terms of variations in inhibitors (low PC in rowers and marathon runners, increased presence of inhibitors in controls). The weightlifters did not show any significant variations, so the kind of exercise involved (training to increase resistance and maximum strength) and the recovery times between the exercises do not seem to trigger changes in coagulation/fibrinolysis. We can therefore confirm that only relatively prolonged effort can trigger a mechanism beneficial to the cardiovascular system. In conclusion, physical activity benefits the coagulation system particularly as regards fibrinolysis, but certain subjects may be at risk of thrombosis and these must be identified and followed. We suggest that fibrinolytic activity be studied in athletes who practise weightlifting and have a history of cardiovascular disease, and that inhibitors (protein C in particular) be studied in rowers with a family history of thromboembolism.

  11. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
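
    A sketch of the reduced-rank idea on an invented covariance (not the REML machinery itself): keeping the m leading principal components of a k x k genetic covariance matrix yields a smoothed, reduced-rank estimate described by m(2k - m + 1)/2 parameters instead of k(k + 1)/2.

```python
import numpy as np

rng = np.random.default_rng(10)
k, m = 8, 3                                   # 8 traits, 3 principal components
A = rng.standard_normal((k, k))
G = A @ A.T                                   # invented "genetic" covariance

vals, vecs = np.linalg.eigh(G)                # eigenvalues in ascending order
G_m = (vecs[:, -m:] * vals[-m:]) @ vecs[:, -m:].T   # rank-m smoothed estimate

print("parameters: full %d  rank-%d %d" % (k * (k + 1) // 2, m, m * (2 * k - m + 1) // 2))
print("variance captured: %.1f%%" % (100.0 * vals[-m:].sum() / vals.sum()))
```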

  12. Neandertal admixture in Eurasia confirmed by maximum-likelihood analysis of three genomes.

    PubMed

    Lohse, Konrad; Frantz, Laurent A F

    2014-04-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4-7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination.

  13. Neandertal Admixture in Eurasia Confirmed by Maximum-Likelihood Analysis of Three Genomes

    PubMed Central

    Lohse, Konrad; Frantz, Laurent A. F.

    2014-01-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4−7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination. PMID:24532731

  14. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
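
    A minimal sketch of the article's starting point, on a toy logistic-growth model with invented data: a gradient-based optimiser searches for the parameters minimising the sum of squared errors (SSE) between prediction and observation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
t = np.linspace(0.0, 10.0, 40)
true_K, true_r = 10.0, 1.0                     # carrying capacity, growth rate
data = true_K / (1.0 + np.exp(-true_r * (t - 5.0))) + 0.3 * rng.standard_normal(t.size)

def sse(theta):                                # sum of squared errors
    K, r = theta
    return np.sum((K / (1.0 + np.exp(-r * (t - 5.0))) - data) ** 2)

res = minimize(sse, x0=[5.0, 0.5], method='L-BFGS-B')   # local gradient search
print("estimated parameters:", res.x)
```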

  15. Phylogenetic evidence for cladogenetic polyploidization in land plants.

    PubMed

    Zhan, Shing H; Drori, Michal; Goldberg, Emma E; Otto, Sarah P; Mayrose, Itay

    2016-07-01

    Polyploidization is a common and recurring phenomenon in plants and is often thought to be a mechanism of "instant speciation". Whether polyploidization is associated with the formation of new species (cladogenesis) or simply occurs over time within a lineage (anagenesis), however, has never been assessed systematically. We tested this hypothesis using phylogenetic and karyotypic information from 235 plant genera (mostly angiosperms). We first constructed a large database of combined sequence and chromosome number data sets using an automated procedure. We then applied likelihood models (ClaSSE) that estimate the degree of synchronization between polyploidization and speciation events in maximum likelihood and Bayesian frameworks. Our maximum likelihood analysis indicated that 35 genera supported a model that includes cladogenetic transitions over a model with only anagenetic transitions, whereas three genera supported a model that incorporates anagenetic transitions over one with only cladogenetic transitions. Furthermore, the Bayesian analysis supported a preponderance of cladogenetic change in four genera but did not support a preponderance of anagenetic change in any genus. Overall, these phylogenetic analyses provide the first broad confirmation that polyploidization is temporally associated with speciation events, suggesting that it is indeed a major speciation mechanism in plants, at least in some genera.

  16. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  17. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
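
    A toy GLRT in the spirit of the detection step (nothing here is the GNSS implementation): the unknown signal amplitude is replaced by its ML estimate and the resulting ratio statistic is thresholded; under the no-signal hypothesis the statistic is approximately chi-square(1). Template and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(12)
n, sigma = 1000, 1.0
template = np.sin(2 * np.pi * 0.01 * np.arange(n))   # known signal shape

def glrt(x):
    a_hat = (x @ template) / (template @ template)   # ML amplitude estimate
    # 2*log GLR for Gaussian noise of known variance reduces to this quadratic:
    return a_hat ** 2 * (template @ template) / sigma ** 2

clean = sigma * rng.standard_normal(n)
attacked = 0.2 * template + sigma * rng.standard_normal(n)
print("statistic, no signal: %.1f   with signal: %.1f" % (glrt(clean), glrt(attacked)))
```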

  18. From Affective Experience to Motivated Action: Tracking Reward-Seeking and Punishment-Avoidant Behaviour in Real-Life

    PubMed Central

    Wichers, Marieke; Kasanova, Zuzana; Bakker, Jindra; Thiery, Evert; Derom, Catherine; Jacobs, Nele; van Os, Jim

    2015-01-01

    Many of the decisions and actions in everyday life result from implicit learning processes. Important to psychopathology are, for example, implicit reward-seeking and punishment-avoidant learning processes. It is known that when specific actions get associated with a rewarding experience, such as positive emotions, that this will increase the likelihood that an organism will engage in similar actions in the future. Similarly, when actions get associated with punishing experiences, such as negative emotions, this may reduce the likelihood that the organism will engage in similar actions in the future. This study examines whether we can observe these implicit processes prospectively in the flow of daily life. If such processes take place then we expect that current behaviour can be predicted by how similar behaviour was experienced (in terms of positive and negative affect) at previous measurement moments. This was examined in a sample of 621 female individuals that had participated in an Experience Sampling data collection. Measures of affect and behaviour were collected at 10 semi-random moments of the day for 5 consecutive days. It was examined whether affective experience that was paired with certain behaviours (physical activity and social context) at previous measurements modified the likelihood to show similar behaviours at next measurement moments. Analyses were performed both at the level of observations (a time scale with units of ± 90 min) and at day level (a time scale with units of 24 h). As expected, we found that affect indeed moderated the extent to which previous behaviour predicted similar behaviour later in time, at both beep- and day-level. This study showed that it is feasible to track reward-seeking and punishment-avoidant behaviour prospectively in humans in the flow of daily life. This opens up a new toolbox to examine processes determining goal-oriented behaviour in relation to psychopathology in humans. PMID:26087323

  19. From Affective Experience to Motivated Action: Tracking Reward-Seeking and Punishment-Avoidant Behaviour in Real-Life.

    PubMed

    Wichers, Marieke; Kasanova, Zuzana; Bakker, Jindra; Thiery, Evert; Derom, Catherine; Jacobs, Nele; van Os, Jim

    2015-01-01

    Many of the decisions and actions in everyday life result from implicit learning processes. Important to psychopathology are, for example, implicit reward-seeking and punishment-avoidant learning processes. It is known that when specific actions get associated with a rewarding experience, such as positive emotions, that this will increase the likelihood that an organism will engage in similar actions in the future. Similarly, when actions get associated with punishing experiences, such as negative emotions, this may reduce the likelihood that the organism will engage in similar actions in the future. This study examines whether we can observe these implicit processes prospectively in the flow of daily life. If such processes take place then we expect that current behaviour can be predicted by how similar behaviour was experienced (in terms of positive and negative affect) at previous measurement moments. This was examined in a sample of 621 female individuals that had participated in an Experience Sampling data collection. Measures of affect and behaviour were collected at 10 semi-random moments of the day for 5 consecutive days. It was examined whether affective experience that was paired with certain behaviours (physical activity and social context) at previous measurements modified the likelihood to show similar behaviours at next measurement moments. Analyses were performed both at the level of observations (a time scale with units of ± 90 min) and at day level (a time scale with units of 24 h). As expected, we found that affect indeed moderated the extent to which previous behaviour predicted similar behaviour later in time, at both beep- and day-level. This study showed that it is feasible to track reward-seeking and punishment-avoidant behaviour prospectively in humans in the flow of daily life. This opens up a new toolbox to examine processes determining goal-oriented behaviour in relation to psychopathology in humans.

  20. Blade row interaction effects on flutter and forced response

    NASA Technical Reports Server (NTRS)

    Buffum, Daniel H.

    1993-01-01

    In the flutter or forced response analysis of a turbomachine blade row, the blade row in question is commonly treated as if it is isolated from the neighboring blade rows. Disturbances created by vibrating blades are then free to propagate away from this blade row without being disturbed. In reality, neighboring blade rows will reflect some portion of this wave energy back toward the vibrating blades, causing additional unsteady forces on them. It is of fundamental importance to determine whether or not these reflected waves can have a significant effect on the aeroelastic stability or forced response of a blade row. Therefore, a procedure to calculate intra-blade-row unsteady aerodynamic interactions was developed which relies upon results available from isolated blade row unsteady aerodynamic analyses. In addition, an unsteady aerodynamic influence coefficient technique is used to obtain a model for the vibratory response in which the neighboring blade rows are also flexible. The flutter analysis shows that interaction effects can be destabilizing, and the forced response analysis shows that interaction effects can result in a significant increase in the resonant response of a blade row.
