Science.gov

Sample records for activation likelihood estimation

  1. Local likelihood estimation

    SciTech Connect

    Tibshirani, R.J.

    1984-12-01

    In this work, we extend the idea of local averaging to likelihood-based regression models. One application is in the class of generalized linear models (Nelder and Wedderburn, 1972). We enlarge this class by replacing the covariate form xβ with an unspecified smooth function s(x). This function is estimated from the data by a technique we call Local Likelihood Estimation - a type of local averaging. Multiple covariates are incorporated through a forward stepwise algorithm. In a number of real data examples, the local likelihood technique proves to be effective in uncovering non-linear dependencies. Finally, we give some asymptotic results for local likelihood estimates and provide some methods for inference.
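
    To make the idea concrete, here is a minimal sketch of local likelihood estimation for a binary response: at each target point the kernel-weighted log-likelihood of a local logistic model is maximized, and the fitted local intercept serves as the estimate of s(x0). The logistic family, Gaussian kernel, bandwidth, and toy data are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal local likelihood sketch: locally weighted logistic regression.
    import numpy as np
    from scipy.optimize import minimize

    def local_logistic_fit(x, y, x0, bandwidth=0.5):
        """Maximize the kernel-weighted Bernoulli log-likelihood around x0 and
        return the local intercept, i.e. the estimate of s(x0) on the logit scale."""
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel weights

        def neg_log_lik(beta):
            eta = beta[0] + beta[1] * (x - x0)           # local linear predictor
            return -(w * (y * eta - np.logaddexp(0.0, eta))).sum()

        return minimize(neg_log_lik, np.zeros(2), method="BFGS").x[0]

    # toy data with a nonlinear dependence that a global linear logit would miss
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 500)
    y = rng.binomial(1, 1 / (1 + np.exp(-np.sin(2 * x))))
    s_hat = [local_logistic_fit(x, y, x0) for x0 in np.linspace(-2.5, 2.5, 11)]
    ```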

  2. Joint maximum likelihood estimation of activation and Hemodynamic Response Function for fMRI.

    PubMed

    Bazargani, Negar; Nosratinia, Aria

    2014-07-01

    Blood Oxygen Level Dependent (BOLD) functional magnetic resonance imaging (fMRI) maps brain activity by measuring blood oxygenation level, which is related to brain activity via a temporal impulse response function known as the Hemodynamic Response Function (HRF). The HRF varies from subject to subject and across areas of the brain; therefore, knowledge of the HRF is necessary for accurately computing voxel activations. Conversely, knowledge of the active voxels is highly beneficial for estimating the HRF. This work presents a joint maximum likelihood estimation of HRF and activation based on low-rank matrix approximations operating on regions of interest (ROI). Since each ROI has limited data, a smoothing constraint on the HRF is employed via Tikhonov regularization. The method is analyzed under both white noise and colored noise. Experiments with synthetic data show that accurate estimation of the HRF is possible with this method without prior assumptions on the exact shape of the HRF. Further experiments on real fMRI data acquired with auditory stimuli are used to validate the proposed method. PMID:24835179
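
    As an illustration of one ingredient mentioned above, the sketch below estimates a smooth HRF from a single voxel time series by Tikhonov-regularized least squares with a second-difference smoothness penalty. The design-matrix construction, penalty form, regularization weight, and toy data are assumptions made for illustration; the paper's full method additionally estimates activation jointly via low-rank matrix approximations over an ROI.

    ```python
    # Tikhonov-regularized HRF estimation sketch for a single voxel time series.
    import numpy as np

    def estimate_hrf(y, stimulus, hrf_len=20, lam=1.0):
        """Estimate an HRF of length hrf_len assuming y ~ conv(stimulus, hrf) + noise."""
        n = len(y)
        X = np.zeros((n, hrf_len))                 # convolution design matrix
        for k in range(hrf_len):
            X[k:, k] = stimulus[:n - k]
        D = np.diff(np.eye(hrf_len), n=2, axis=0)  # second-difference smoothness penalty
        return np.linalg.solve(X.T @ X + lam * (D.T @ D), X.T @ y)

    # toy usage: boxcar stimulus convolved with a synthetic smooth HRF plus noise
    rng = np.random.default_rng(1)
    stim = (np.arange(200) % 40 < 5).astype(float)
    true_hrf = np.exp(-0.5 * ((np.arange(20) - 6) / 2.5) ** 2)
    y = np.convolve(stim, true_hrf)[:200] + 0.1 * rng.standard_normal(200)
    hrf_hat = estimate_hrf(y, stim)
    ```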

  3. Neuroimaging of reading intervention: a systematic review and activation likelihood estimate meta-analysis.

    PubMed

    Barquero, Laura A; Davis, Nicole; Cutting, Laurie E

    2014-01-01

    A growing number of studies examine instructional training and brain activity. The purpose of this paper is to review the literature regarding neuroimaging of reading intervention, with a particular focus on reading difficulties (RD). To locate relevant studies, searches of peer-reviewed literature were conducted using electronic databases to search for studies from the imaging modalities of fMRI and MEG (including MSI) that explored reading intervention. Of the 96 identified studies, 22 met the inclusion criteria for descriptive analysis. A subset of these (8 fMRI experiments with post-intervention data) was subjected to activation likelihood estimate (ALE) meta-analysis to investigate differences in functional activation following reading intervention. Findings from the literature review suggest differences in functional activation of numerous brain regions associated with reading intervention, including bilateral inferior frontal, superior temporal, middle temporal, middle frontal, superior frontal, and postcentral gyri, as well as bilateral occipital cortex, inferior parietal lobules, thalami, and insulae. Findings from the meta-analysis indicate change in functional activation following reading intervention in the left thalamus, right insula/inferior frontal, left inferior frontal, right posterior cingulate, and left middle occipital gyri. Though these findings should be interpreted with caution due to the small number of studies and the disparate methodologies used, this paper is an effort to synthesize across studies and to guide future exploration of neuroimaging and reading intervention. PMID:24427278

  4. Neuroimaging of Reading Intervention: A Systematic Review and Activation Likelihood Estimate Meta-Analysis

    PubMed Central

    Barquero, Laura A.; Davis, Nicole; Cutting, Laurie E.

    2014-01-01

    A growing number of studies examine instructional training and brain activity. The purpose of this paper is to review the literature regarding neuroimaging of reading intervention, with a particular focus on reading difficulties (RD). To locate relevant studies, searches of peer-reviewed literature were conducted using electronic databases to search for studies from the imaging modalities of fMRI and MEG (including MSI) that explored reading intervention. Of the 96 identified studies, 22 met the inclusion criteria for descriptive analysis. A subset of these (8 fMRI experiments with post-intervention data) was subjected to activation likelihood estimate (ALE) meta-analysis to investigate differences in functional activation following reading intervention. Findings from the literature review suggest differences in functional activation of numerous brain regions associated with reading intervention, including bilateral inferior frontal, superior temporal, middle temporal, middle frontal, superior frontal, and postcentral gyri, as well as bilateral occipital cortex, inferior parietal lobules, thalami, and insulae. Findings from the meta-analysis indicate change in functional activation following reading intervention in the left thalamus, right insula/inferior frontal, left inferior frontal, right posterior cingulate, and left middle occipital gyri. Though these findings should be interpreted with caution due to the small number of studies and the disparate methodologies used, this paper is an effort to synthesize across studies and to guide future exploration of neuroimaging and reading intervention. PMID:24427278

  5. Structural and functional neural adaptations in obstructive sleep apnea: An activation likelihood estimation meta-analysis.

    PubMed

    Tahmasian, Masoud; Rosenzweig, Ivana; Eickhoff, Simon B; Sepehry, Amir A; Laird, Angela R; Fox, Peter T; Morrell, Mary J; Khazaie, Habibolah; Eickhoff, Claudia R

    2016-06-01

    Obstructive sleep apnea (OSA) is a common multisystem chronic disorder. Functional and structural neuroimaging has been widely applied in patients with OSA, but these studies have often yielded diverse results. The present quantitative meta-analysis aims to identify consistent patterns of abnormal activation and grey matter loss in OSA across studies. We used PubMed to retrieve task/resting-state functional magnetic resonance imaging and voxel-based morphometry studies. Stereotactic data were extracted from fifteen studies, and subsequently tested for convergence using activation likelihood estimation. We found convergent evidence for structural atrophy and functional disturbances in the right basolateral amygdala/hippocampus and the right central insula. Functional characterization of these regions using the BrainMap database suggested associated dysfunction of emotional, sensory, and limbic processes. Assessment of task-based co-activation patterns furthermore indicated that the two regions obtained from the meta-analysis are part of a joint network comprising the anterior insula, posterior-medial frontal cortex and thalamus. Taken together, our findings highlight the role of right amygdala, hippocampus and insula in the abnormal emotional and sensory processing in OSA. PMID:27039344

  6. The Autonomic Brain: An Activation Likelihood Estimation Meta-Analysis for Central Processing of Autonomic Function

    PubMed Central

    Meissner, Karin; Bär, Karl-Jürgen; Napadow, Vitaly

    2013-01-01

    The autonomic nervous system (ANS) is of paramount importance for daily life. Its regulatory action on respiratory, cardiovascular, digestive, endocrine, and many other systems is controlled by a number of structures in the CNS. While the majority of these nuclei and cortices have been identified in animal models, neuroimaging studies have recently begun to shed light on central autonomic processing in humans. In this study, we used activation likelihood estimation to conduct a meta-analysis of human neuroimaging experiments evaluating central autonomic processing to localize (1) cortical and subcortical areas involved in autonomic processing, (2) potential subsystems for the sympathetic and parasympathetic divisions of the ANS, and (3) potential subsystems for specific ANS responses to different stimuli/tasks. Across all tasks, we identified a set of consistently activated brain regions, comprising left amygdala, right anterior and left posterior insula and midcingulate cortices that form the core of the central autonomic network. While sympathetic-associated regions predominate in executive- and salience-processing networks, parasympathetic regions predominate in the default mode network. Hence, central processing of autonomic function does not simply involve a monolithic network of brain regions, instead showing elements of task and division specificity. PMID:23785162

  7. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    PubMed

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606
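
    For readers unfamiliar with the method being characterized, the following is a minimal sketch of the core ALE computation: each reported focus is modeled as a 3D Gaussian probability map, per-experiment modeled activation (MA) maps take the voxelwise union over foci, and MA maps are combined across experiments by the same union rule. The sample-size-dependent smoothing kernels, anatomical masking, and the permutation-based thresholding evaluated in the paper are omitted; the grid size and kernel width below are arbitrary.

    ```python
    # Minimal sketch of the Activation Likelihood Estimation (ALE) map computation.
    import numpy as np

    def modeled_activation_map(foci, shape, sigma):
        """Per-experiment MA map: voxelwise union of normalized 3D Gaussians
        centred on the reported foci."""
        grid = np.indices(shape).reshape(3, -1).T        # all voxel coordinates
        ma = np.zeros(np.prod(shape))
        for f in foci:
            d2 = ((grid - f) ** 2).sum(axis=1)
            p = np.exp(-d2 / (2 * sigma ** 2))
            p /= p.sum()                                  # normalize to a probability map
            ma = 1 - (1 - ma) * (1 - p)                   # union across foci
        return ma.reshape(shape)

    def ale_map(experiments, shape, sigma=2.0):
        """Combine per-experiment MA maps into an ALE map via the union rule."""
        ale = np.zeros(shape)
        for foci in experiments:
            ale = 1 - (1 - ale) * (1 - modeled_activation_map(np.asarray(foci), shape, sigma))
        return ale

    # toy example: three small "experiments" reporting foci on a 20x20x20 grid
    experiments = [[(10, 10, 10), (5, 5, 5)], [(11, 9, 10)], [(10, 11, 11), (15, 4, 6)]]
    ale = ale_map(experiments, shape=(20, 20, 20))
    ```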

  8. Cortical Midline Structures and Autobiographical-Self Processes: An Activation-Likelihood Estimation Meta-Analysis

    PubMed Central

    Araujo, Helder F.; Kaplan, Jonas; Damasio, Antonio

    2013-01-01

    The autobiographical-self refers to a mental state derived from the retrieval and assembly of memories regarding one’s biography. The process of retrieval and assembly, which can focus on biographical facts or personality traits or some combination thereof, is likely to vary according to the domain chosen for an experiment. To date, the investigation of the neural basis of this process has largely focused on the domain of personality traits, using paradigms that contrasted the evaluation of one’s own traits (self-traits) with that of another person’s traits (other-traits). This has led to the suggestion that cortical midline structures (CMSs) are specifically related to self states. Here, with the goal of testing this suggestion, we conducted activation-likelihood estimation (ALE) meta-analyses based on data from 28 neuroimaging studies. The ALE results show that both self-traits and other-traits engage CMSs; however, the engagement of medial prefrontal cortex is greater for self-traits than for other-traits, while the posteromedial cortex is more engaged for other-traits than for self-traits. These findings suggest that the involvement of CMSs is not specific to the evaluation of one’s own traits, but also occurs during the evaluation of another person’s traits. PMID:24027520

  9. A meta-analysis of neuroimaging studies on divergent thinking using activation likelihood estimation.

    PubMed

    Wu, Xin; Yang, Wenjing; Tong, Dandan; Sun, Jiangzhou; Chen, Qunlin; Wei, Dongtao; Zhang, Qinglin; Zhang, Meng; Qiu, Jiang

    2015-07-01

    In this study, an activation likelihood estimation (ALE) meta-analysis was used to conduct a quantitative investigation of neuroimaging studies on divergent thinking. Based on the ALE results, the functional magnetic resonance imaging (fMRI) studies showed that distributed brain regions were more active during divergent thinking tasks (DTTs) than during control tasks, while a large portion of brain regions were deactivated. The ALE results indicated that the brain networks of creative idea generation in DTTs may be composed of the lateral prefrontal cortex, posterior parietal cortex [such as the inferior parietal lobule (BA 40) and precuneus (BA 7)], anterior cingulate cortex (ACC) (BA 32), and several regions in the temporal cortex [such as the left middle temporal gyrus (BA 39) and left fusiform gyrus (BA 37)]. The left dorsolateral prefrontal cortex (BA 46) was related to selecting loosely and remotely associated concepts and organizing them into creative ideas, whereas the ACC (BA 32) was related to observing and forming distant semantic associations while performing DTTs. The posterior parietal cortex may be involved in retrieving and buffering the semantic information related to the formed creative ideas, and several regions in the temporal cortex may be related to stored long-term memory. In addition, the ALE results of the structural studies showed that divergent thinking was related to the dopaminergic system (e.g., left caudate and claustrum). Based on the ALE results, both fMRI and structural MRI studies could uncover the neural basis of divergent thinking from different aspects (e.g., specific cognitive processing and stable individual differences in cognitive capability). PMID:25891081

  10. Neural networks involved in adolescent reward processing: An activation likelihood estimation meta-analysis of functional neuroimaging studies.

    PubMed

    Silverman, Merav H; Jedd, Kelly; Luciana, Monica

    2015-11-15

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: (1) confirm the network of brain regions involved in adolescents' reward processing, (2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and (3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587

  11. Altered sensorimotor activation patterns in idiopathic dystonia—an activation likelihood estimation meta‐analysis of functional brain imaging studies

    PubMed Central

    Herz, Damian M.; Haagensen, Brian N.; Lorentzen, Anne K.; Eickhoff, Simon B.; Siebner, Hartwig R.

    2015-01-01

    Dystonia is characterized by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements or postures. Functional neuroimaging studies have yielded abnormal task-related sensorimotor activation in dystonia, but the results appear to be rather variable across studies. Further, study sizes were usually small and included different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity across studies. Activation likelihood estimates were based on previously reported regional maxima of task-related increases or decreases in dystonia patients compared to healthy controls. The meta-analyses encompassed data from 179 patients with dystonia reported in 18 functional neuroimaging studies using a range of sensorimotor tasks. Patients with dystonia showed bilateral increases in task-related activation in the parietal operculum and ventral postcentral gyrus as well as right middle temporal gyrus. Decreases in task-related activation converged in left supplementary motor area and left postcentral gyrus, right superior temporal gyrus and dorsal midbrain. Apart from the midbrain cluster, all between-group differences in task-related activity were retrieved in a sub-analysis including only the 14 studies on patients with focal dystonia. For focal dystonia, an additional cluster of increased sensorimotor activation emerged in the caudal cingulate motor zone. The results show that dystonia is consistently associated with abnormal somatosensory processing in the primary and secondary somatosensory cortex along with abnormal sensorimotor activation of mesial premotor and right lateral temporal cortex. Hum Brain Mapp 37:547-557, 2016. © 2015 Wiley Periodicals, Inc. PMID:26549606

  12. Altered sensorimotor activation patterns in idiopathic dystonia-an activation likelihood estimation meta-analysis of functional brain imaging studies.

    PubMed

    Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian N; Lorentzen, Anne K; Eickhoff, Simon B; Siebner, Hartwig R

    2016-02-01

    Dystonia is characterized by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements or postures. Functional neuroimaging studies have yielded abnormal task-related sensorimotor activation in dystonia, but the results appear to be rather variable across studies. Further, study sizes were usually small and included different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity across studies. Activation likelihood estimates were based on previously reported regional maxima of task-related increases or decreases in dystonia patients compared to healthy controls. The meta-analyses encompassed data from 179 patients with dystonia reported in 18 functional neuroimaging studies using a range of sensorimotor tasks. Patients with dystonia showed bilateral increases in task-related activation in the parietal operculum and ventral postcentral gyrus as well as right middle temporal gyrus. Decreases in task-related activation converged in left supplementary motor area and left postcentral gyrus, right superior temporal gyrus and dorsal midbrain. Apart from the midbrain cluster, all between-group differences in task-related activity were retrieved in a sub-analysis including only the 14 studies on patients with focal dystonia. For focal dystonia, an additional cluster of increased sensorimotor activation emerged in the caudal cingulate motor zone. The results show that dystonia is consistently associated with abnormal somatosensory processing in the primary and secondary somatosensory cortex along with abnormal sensorimotor activation of mesial premotor and right lateral temporal cortex. Hum Brain Mapp 37:547-557, 2016. © 2015 Wiley Periodicals, Inc. PMID:26549606

  13. Maximum Likelihood Estimation in Generalized Rasch Models.

    ERIC Educational Resources Information Center

    de Leeuw, Jan; Verhelst, Norman

    1986-01-01

    Maximum likelihood procedures are presented for a general model to unify the various models and techniques that have been proposed for item analysis. Unconditional maximum likelihood estimation, proposed by Wright and Haberman, and conditional maximum likelihood estimation, proposed by Rasch and Andersen, are shown as important special cases. (JAZ)
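
    For reference, the dichotomous Rasch model and the conditional likelihood that eliminates the person parameters can be written as follows (a sketch of the standard setup; the notation is illustrative):

    $$
    P(X_{vi}=1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)},
    \qquad
    P\bigl(x_{v1}, \dots, x_{vk} \,\big|\, r_v\bigr) = \frac{\exp\!\bigl(-\sum_i x_{vi} b_i\bigr)}{\gamma_{r_v}\!\bigl(e^{-b_1}, \dots, e^{-b_k}\bigr)},
    $$

    where θ_v and b_i are person and item parameters, r_v is person v's raw score, and γ_r is the elementary symmetric function of order r. Unconditional (joint) maximum likelihood maximizes the likelihood jointly in θ and b, whereas conditional maximum likelihood maximizes the second expression, from which θ_v has dropped out.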

  14. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    PubMed Central

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976

  15. Reinforcement Learning Models and Their Neural Correlates: An Activation Likelihood Estimation Meta-Analysis

    PubMed Central

    Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.

    2015-01-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667

  16. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    PubMed

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments-prediction error-is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667

  17. Targeted maximum likelihood estimation in safety analysis

    PubMed Central

    Lendle, Samuel D.; Fireman, Bruce; van der Laan, Mark J.

    2013-01-01

    Objectives To compare the performance of a targeted maximum likelihood estimator (TMLE) and a collaborative TMLE (CTMLE) to other estimators in a drug safety analysis, including a regression-based estimator, propensity score (PS)–based estimators, and an alternate doubly robust (DR) estimator in a real example and simulations. Study Design and Setting The real data set is a subset of observational data from Kaiser Permanente Northern California formatted for use in active drug safety surveillance. Both the real and simulated data sets include potential confounders, a treatment variable indicating use of one of two antidiabetic treatments and an outcome variable indicating occurrence of an acute myocardial infarction (AMI). Results In the real data example, there is no difference in AMI rates between treatments. In simulations, the double robustness property is demonstrated: DR estimators are consistent if either the initial outcome regression or PS estimator is consistent, whereas other estimators are inconsistent if the initial estimator is not consistent. In simulations with near-positivity violations, CTMLE performs well relative to other estimators by adaptively estimating the PS. Conclusion Each of the DR estimators was consistent, and TMLE and CTMLE had the smallest mean squared error in simulations. PMID:23849159
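
    As background for the comparison above, a minimal sketch of the TMLE targeting step for a binary outcome is shown below, estimating the treatment-specific mean E[Y(1)]: an initial outcome regression is fluctuated by a logistic regression of the outcome on the "clever covariate" A/g(W), with the initial fit entering as an offset. The logistic working models and variable names are illustrative assumptions, not the estimators used in the cited study.

    ```python
    # Minimal TMLE sketch for E[Y(1)] with binary Y, binary treatment A, covariates W.
    import numpy as np
    import statsmodels.api as sm

    def tmle_mean_treated(Y, A, W):
        """Targeted (substitution) estimate of E[Y(1)] from numpy arrays Y, A and 2-D W."""
        Xw = sm.add_constant(W)
        # 1. initial outcome regression Qbar(A, W), here a pooled logistic model
        Xq = np.column_stack([A, Xw])
        Q_fit = sm.GLM(Y, Xq, family=sm.families.Binomial()).fit()
        Q1 = Q_fit.predict(np.column_stack([np.ones_like(A), Xw]))   # Qbar(1, W)
        QA = Q_fit.predict(Xq)                                       # Qbar(A, W)
        # 2. propensity score g(W) = P(A = 1 | W), truncated away from 0 and 1
        g = np.clip(sm.GLM(A, Xw, family=sm.families.Binomial()).fit().predict(Xw), 0.025, 0.975)
        # 3. fluctuation: logistic regression of Y on H = A/g with offset logit(Qbar(A, W))
        H = A / g
        eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
                     offset=np.log(QA / (1 - QA))).fit().params[0]
        # 4. update Qbar(1, W) on the logit scale and average (substitution estimator)
        Q1_star = 1 / (1 + np.exp(-(np.log(Q1 / (1 - Q1)) + eps / g)))
        return Q1_star.mean()
    ```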

  18. The neural bases of difficult speech comprehension and speech production: Two Activation Likelihood Estimation (ALE) meta-analyses.

    PubMed

    Adank, Patti

    2012-07-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension of less intelligible/distorted speech with more intelligible speech. Meta-analysis 2 (21 studies) identified areas associated with speech production. The results indicate that difficult comprehension involves, first, increased reliance on cortical regions in which comprehension and production overlapped (bilateral anterior Superior Temporal Sulcus (STS) and anterior Supplementary Motor Area (pre-SMA)) and on an area associated with intelligibility processing (left posterior MTG), and, second, increased reliance on cortical areas associated with general executive processes (bilateral anterior insulae). Comprehension of distorted speech may be supported by a hybrid neural mechanism combining increased involvement of areas associated with general executive processing and areas shared between comprehension and production. PMID:22633697

  19. Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis.

    PubMed

    Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti

    2015-02-01

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the posterior (but not anterior) temporal lobe for both semantic and syntactic processing. PMID:25576690

  20. Stuttering, Induced Fluency, and Natural Fluency: A Hierarchical Series of Activation Likelihood Estimation Meta-Analyses

    PubMed Central

    Budde, Kristin S.; Barron, Daniel S.; Fox, Peter T.

    2015-01-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as “neural signatures of stuttering” (Brown 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: 1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and 2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820

  1. Effects of Stimulus Type and Strategy on Mental Rotation Network: An Activation Likelihood Estimation Meta-Analysis

    PubMed Central

    Tomasino, Barbara; Gremese, Michele

    2016-01-01

    We can predict how an object would look if we were to see it from a different viewpoint. The brain network governing mental rotation (MR) has been studied using a variety of stimuli and task instructions. By using activation likelihood estimation (ALE) meta-analysis we tested whether different MR networks can be modulated by the type of stimulus (body vs. non-body parts) or by the type of task instructions (motor imagery-based vs. non-motor imagery-based MR instructions). Testing for the bodily and non-bodily stimulus axis revealed a bilateral sensorimotor activation for bodily-related as compared to non-bodily-related stimuli and a posterior right-lateralized activation for non-bodily-related as compared to bodily-related stimuli. A top-down modulation of the network was exerted by the MR task instructions, with motor imagery-based instructions engaging a bilateral (preferentially left sensorimotor) network and non-motor imagery-based instructions activating a preferentially right posterior occipito-temporal-parietal network. The present quantitative meta-analysis summarizes and amends previous descriptions of the brain network related to MR and shows how it is modulated by top-down and bottom-up experimental factors. PMID:26779003

  2. The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts

    PubMed Central

    Ishibashi, Ryo; Pobric, Gorana; Saito, Satoru; Lambon Ralph, Matthew A.

    2016-01-01

    The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies. PMID:27362967

  3. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    PubMed

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe. PMID:26683385

  4. Collaborative double robust targeted maximum likelihood estimation.

    PubMed

    van der Laan, Mark J; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q(0) in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum

  5. Collaborative Double Robust Targeted Maximum Likelihood Estimation*

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum

  6. Estimating the Likelihood of Extreme Seismogenic Tsunamis

    NASA Astrophysics Data System (ADS)

    Geist, E. L.

    2011-12-01

    Because of high levels of destruction to coastal communities and critical facilities from recent tsunamis, estimating the likelihood of extreme seismogenic tsunamis has gained increased attention. Seismogenic tsunami generating capacity is directly related to the scalar seismic moment of the earthquake. As such, earthquake size distributions and recurrence can inform the likelihood of tsunami occurrence. The probability of extreme tsunamis is dependent on how the right-hand tail of the earthquake size distribution is specified. As evidenced by the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes, it is likely that there is insufficient historical information to estimate the maximum earthquake magnitude (Mmax) for any specific subduction zone. Mmax may in fact not be a useful concept for subduction zones of significant length. Earthquake size distributions with a soft corner moment appear more consistent with global observations. Estimating the likelihood of extreme local tsunami runup is complicated by the fact that there is significant uncertainty in the scaling relationship between seismic moment and maximum local tsunami runup. This uncertainty arises from variations in source parameters specific to tsunami generation and the near-shore hydrodynamic response. The primary source effect is how slip is distributed along the fault relative to the overlying water depth. For high slip beneath deep water, shoaling amplification of the tsunami increases substantially according to Green's Law, compared to an equivalent amount of slip beneath shallow water. Both stochastic slip models and dynamic rupture models of tsunamigenic earthquakes are explored in a probabilistic context. The nearshore hydrodynamic response includes attenuating mechanisms, such as wave breaking, and amplifying mechanisms, such as constructive interference of trapped and non-trapped modes. Probabilistic estimates of extreme tsunamis are therefore site specific, as indicated by significant variations
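
    The shoaling amplification invoked above follows Green's law, under which (for slowly varying depth and neglecting dissipation) the wave amplitude scales with the water depth h as

    $$
    \frac{A_1}{A_0} = \left(\frac{h_0}{h_1}\right)^{1/4},
    $$

    so the same slip placed beneath deeper water produces a tsunami that amplifies more strongly as it shoals toward shallow water near the coast.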

  7. Correlation Between Brain Activation Changes and Cognitive Improvement Following Cognitive Remediation Therapy in Schizophrenia: An Activation Likelihood Estimation Meta-analysis

    PubMed Central

    Wei, Yan-Yan; Wang, Ji-Jun; Yan, Chao; Li, Zi-Qiang; Pan, Xiao; Cui, Yi; Su, Tong; Liu, Tao-Sheng; Tang, Yun-Xiang

    2016-01-01

    Background: Several studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have indicated that cognitive remediation therapy (CRT) might improve cognitive function by changing brain activations in patients with schizophrenia. However, the results regarding which brain areas changed were not consistent across studies. The present activation likelihood estimation (ALE) meta-analysis was conducted to investigate whether cognitive improvement was accompanied by changes in brain activation, and where the areas most related to these changes were located in schizophrenia patients after CRT. Analyses of whole-brain studies and whole-brain + region of interest (ROI) studies were compared to explore the effect of the different methodologies on the results. Methods: A computerized systematic search was conducted to collect fMRI and PET studies on brain activation changes in schizophrenia patients from pre- to post-CRT. Nine studies using fMRI techniques were included in the meta-analysis. Ginger ALE 2.3.1 was used to perform meta-analysis across these imaging studies. Results: The main areas with increased brain activation were in the frontal and parietal lobes, including left medial frontal gyrus, left inferior frontal gyrus, right middle frontal gyrus, right postcentral gyrus, and inferior parietal lobule, in patients after CRT, yet no decreased brain activation was found. Although similar areas of increased activation were identified in ALE analyses with or without ROI studies, the analysis including ROI studies had a higher ALE value. Conclusions: The current findings suggest that CRT might improve the cognition of schizophrenia patients by increasing activations of the frontal and parietal lobes. In addition, including ROI studies in an ALE meta-analysis might provide additional evidence to confirm the results. PMID:26904993

  8. LIKELIHOOD OF THE POWER SPECTRUM IN COSMOLOGICAL PARAMETER ESTIMATION

    SciTech Connect

    Sun, Lei; Wang, Qiao; Zhan, Hu

    2013-11-01

    The likelihood function is a crucial element of parameter estimation. In analyses of galaxy overdensities and weak lensing shear, one often approximates the likelihood of the power spectrum with a Gaussian distribution. The posterior probability derived from such a likelihood deviates considerably from the exact posterior on the largest scales probed by any survey, where the central limit theorem does not apply. We show that various forms of Gaussian likelihoods can have a significant impact on the estimation of the primordial non-Gaussianity parameter f_NL from the galaxy angular power spectrum. The Gaussian plus log-normal likelihood, which has been applied successfully in analyses of the cosmic microwave background, outperforms the Gaussian likelihoods. Nevertheless, even if the exact likelihood of the power spectrum is used, the estimated parameters may still be biased. As such, the likelihoods and estimators need to be thoroughly examined for potential systematic errors.
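
    As a schematic illustration of the approximations discussed above (per-multipole variances, noise bias, and bandpower correlations are omitted here), the Gaussian and log-normal likelihoods of a measured power spectrum given a model C_ℓ can be written as

    $$
    -2\ln\mathcal{L}_{\mathrm{G}} = \sum_{\ell}\frac{\bigl(\hat{C}_{\ell}-C_{\ell}\bigr)^{2}}{\sigma_{\ell}^{2}},
    \qquad
    -2\ln\mathcal{L}_{\mathrm{LN}} = \sum_{\ell}\frac{\bigl(\ln\hat{C}_{\ell}-\ln C_{\ell}\bigr)^{2}}{\bigl(\sigma_{\ell}/C_{\ell}\bigr)^{2}},
    $$

    and the "Gaussian plus log-normal" form used in CMB analyses combines the two, e.g. as $\ln\mathcal{L} \approx \tfrac{1}{3}\ln\mathcal{L}_{\mathrm{G}} + \tfrac{2}{3}\ln\mathcal{L}_{\mathrm{LN}}$.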

  9. The Relative Performance of Targeted Maximum Likelihood Estimators

    PubMed Central

    Porter, Kristin E.; Gruber, Susan; van der Laan, Mark J.; Sekhon, Jasjeet S.

    2011-01-01

    There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes, are especially suitable for dealing with positivity violations because in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570

  10. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
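
    A minimal sketch of the estimation step described above, fitting a two-component normal mixture by maximum likelihood via the EM algorithm; the synthetic data, starting values, and fixed iteration count are illustrative assumptions.

    ```python
    # EM algorithm sketch for ML fitting of a two-component normal mixture.
    import numpy as np

    def em_two_component_normal(x, n_iter=200):
        """Return (weights, means, sds) approximately maximizing the mixture likelihood of x."""
        w = np.array([0.5, 0.5])                                     # starting values
        mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each observation
            dens = np.vstack([
                w[k] * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / (sd[k] * np.sqrt(2 * np.pi))
                for k in range(2)
            ])
            resp = dens / dens.sum(axis=0)
            # M-step: weighted maximum likelihood updates
            nk = resp.sum(axis=1)
            w = nk / len(x)
            mu = (resp @ x) / nk
            sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        return w, mu, sd

    # toy usage on synthetic data with two latent regimes
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-0.5, 0.5, 300), rng.normal(1.0, 1.0, 700)])
    weights, means, sds = em_two_component_normal(x)
    ```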

  11. Mapping the "What" and "Where" Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry.

    PubMed

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C W; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing "what" and "where" visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in "what" and "where" visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three "where" visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both "what" and "where" visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  12. Mapping the “What” and “Where” Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry

    PubMed Central

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  13. Nonparametric identification and maximum likelihood estimation for hidden Markov models

    PubMed Central

    Alexandrovich, G.; Holzmann, H.; Leister, A.

    2016-01-01

    Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness of fit tests are investigated in a simulation study.
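
    For context, the quantity that (nonparametric) maximum likelihood estimation maximizes can be evaluated with the scaled forward recursion; the sketch below does so for a finite-state HMM. The Gaussian state-dependent densities and toy parameters are illustrative assumptions (the paper treats the state-dependent densities nonparametrically).

    ```python
    # Scaled forward recursion for the log-likelihood of a finite-state HMM.
    import numpy as np
    from scipy.stats import norm

    def hmm_log_likelihood(x, pi, P, means, sds):
        """Log-likelihood of observations x given initial distribution pi, transition
        matrix P and Gaussian state-dependent densities with the given means/sds."""
        dens = np.vstack([norm.pdf(x, m, s) for m, s in zip(means, sds)]).T  # (T, K)
        alpha = pi * dens[0]
        log_lik = 0.0
        for t in range(1, len(x)):
            c = alpha.sum()                      # scaling constant, accumulated in the log
            log_lik += np.log(c)
            alpha = (alpha / c) @ P * dens[t]    # scaled forward step
        return log_lik + np.log(alpha.sum())

    # toy usage with two Gaussian states
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
    ll = hmm_log_likelihood(x, pi=np.array([0.5, 0.5]),
                            P=np.array([[0.9, 0.1], [0.1, 0.9]]),
                            means=[0.0, 3.0], sds=[1.0, 1.0])
    ```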

  14. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  15. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
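    For orientation, the sketch below computes the standard Landy-Szalay combination of normalized pair counts and, for contrast, a deliberately simplified one-parameter likelihood maximization over ξ that uses the same observables. The Poisson model for DD is a simplifying assumption for illustration only; it is not the maximum likelihood estimator defined in the paper, and all counts are invented.

```python
# Sketch: LS estimator from pair counts, plus a schematic one-parameter ML fit of xi.
import numpy as np
from scipy.optimize import minimize_scalar

def ls_estimate(DD, DR, RR, nD, nR):
    """Landy-Szalay estimator from raw pair counts in one separation bin."""
    dd = DD / (nD * (nD - 1) / 2.0)   # normalized data-data pairs
    dr = DR / (nD * nR)               # normalized data-random pairs
    rr = RR / (nR * (nR - 1) / 2.0)   # normalized random-random pairs
    return (dd - 2.0 * dr + rr) / rr

def schematic_ml_estimate(DD, RR, nD, nR):
    """Toy ML: treat DD as Poisson with mean (1 + xi) times the xi = 0 expectation."""
    expected_dd = RR * (nD * (nD - 1)) / (nR * (nR - 1))  # expected DD if xi = 0
    nll = lambda xi: -(DD * np.log((1 + xi) * expected_dd) - (1 + xi) * expected_dd)
    return minimize_scalar(nll, bounds=(-0.99, 10.0), method="bounded").x

DD, DR, RR, nD, nR = 5230, 48100, 112500, 1000, 5000   # made-up counts
print(ls_estimate(DD, DR, RR, nD, nR), schematic_ml_estimate(DD, RR, nD, nR))
```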

  16. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  17. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  18. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to statistical insignificance with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  19. A Targeted Maximum Likelihood Estimator for Two-Stage Designs

    PubMed Central

    Rose, Sherri; van der Laan, Mark J.

    2011-01-01

    We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the full-data structure collected in the second-stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator. PMID:21556285

  20. Structural brain changes associated with antipsychotic treatment in schizophrenia as revealed by voxel-based morphometric MRI: an activation likelihood estimation meta-analysis

    PubMed Central

    2013-01-01

    Background The results of multiple studies on the association between antipsychotic use and structural brain changes in schizophrenia have been assessed only in qualitative literature reviews to date. We aimed to perform a meta-analysis of voxel-based morphometry (VBM) studies on this association to quantitatively synthesize the findings of these studies. Methods A systematic computerized literature search was carried out through MEDLINE/PubMed, EMBASE, ISI Web of Science, SCOPUS and PsycINFO databases aiming to identify all VBM studies addressing this question and meeting predetermined inclusion criteria. All studies reporting coordinates representing foci of structural brain changes associated with antipsychotic use were meta-analyzed by using the activation likelihood estimation technique, currently the most sophisticated and best-validated tool for voxel-wise meta-analysis of neuroimaging studies. Results Ten studies (five cross-sectional and five longitudinal) met the inclusion criteria and comprised a total of 548 individuals (298 patients on antipsychotic drugs and 250 controls). Depending on the methodologies of the selected studies, the control groups included healthy subjects, drug-free patients, or the same patients evaluated repeatedly in longitudinal comparisons (i.e., serving as their own controls). A total of 102 foci associated with structural alterations were retrieved. The meta-analysis revealed seven clusters of areas with consistent structural brain changes in patients on antipsychotics compared to controls. The seven clusters included four areas of relative volumetric decrease in the left lateral temporal cortex [Brodmann area (BA) 20], left inferior frontal gyrus (BA 44), superior frontal gyrus extending to the left middle frontal gyrus (BA 6), and right rectal gyrus (BA 11), and three areas of relative volumetric increase in the left dorsal anterior cingulate cortex (BA 24), left ventral anterior cingulate cortex (BA 24) and right putamen

  1. Contrastive Pessimistic Likelihood Estimation for Semi-Supervised Classification.

    PubMed

    Loog, Marco

    2016-03-01

    Improvement guarantees for semi-supervised classifiers can currently only be given under restrictive conditions on the data. We propose a general way to perform semi-supervised parameter estimation for likelihood-based classifiers for which, on the full training set, the estimates are never worse than the supervised solution in terms of the log-likelihood. We argue, moreover, that we may expect these solutions to really improve upon the supervised classifier in particular cases. In a worked-out example for LDA, we take it one step further and essentially prove that its semi-supervised version is strictly better than its supervised counterpart. The two new concepts that form the core of our estimation principle are contrast and pessimism. The former refers to the fact that our objective function takes the supervised estimates into account, enabling the semi-supervised solution to explicitly control the potential improvements over this estimate. The latter refers to the fact that our estimates are conservative and therefore resilient to whatever form the true labeling of the unlabeled data takes on. Experiments demonstrate the improvements in terms of both the log-likelihood and the classification error rate on independent test sets. PMID:27046491

  2. Maximum likelihood estimation of shear wave speed in transient elastography.

    PubMed

    Audière, Stéphane; Angelini, Elsa D; Sandrin, Laurent; Charbit, Maurice

    2014-06-01

    Ultrasonic transient elastography (TE) makes it possible to assess, under active mechanical constraints, the elasticity of the liver, which correlates with hepatic fibrosis stage. This technique is routinely used in clinical practice to assess liver stiffness noninvasively. The Fibroscan system used in this work generates a shear wave via an impulse stress applied on the surface of the skin and records a temporal series of radio-frequency (RF) lines using a single-element ultrasound probe. A shear wave propagation map (SWPM) is generated as a 2-D map of the displacements along depth and time, derived from the correlations of the sequential 1-D RF lines, assuming that the direction of propagation (DOP) of the shear wave coincides with the ultrasound beam axis (UBA). Under the assumption of purely elastic tissue, elasticity is proportional to the square of the shear wave speed. This paper introduces a novel approach to the processing of the SWPM, deriving the maximum likelihood estimate of the shear wave speed by comparing the observed displacements with the estimates provided by the Green's functions. A simple parametric model is used to interface Green's theoretical values with the noisy measures provided by the SWPM, taking into account depth-varying attenuation and time delay. The proposed method was evaluated on numerical simulations using a finite element method simulator and on physical phantoms. Evaluation on this test database reported very high agreement of shear wave speed measures when the DOP and UBA coincide. PMID:24835213

  3. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
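    A minimal sketch of the compression idea, assuming toy signal and noise covariance matrices: the data vector is projected onto the highest signal-to-noise eigenmodes obtained from the generalized eigenproblem. The covariances, pixel count, and retained mode count below are invented for illustration; none of the WMAP-specific machinery is reproduced.

```python
# Sketch: compressing a Gaussian data vector onto its highest signal-to-noise eigenmodes.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(10)
npix = 200
x = np.arange(npix)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 10.0) ** 2)   # toy smooth "signal" covariance
N = 0.5 * np.eye(npix)                                       # toy white "noise" covariance

# generalized eigenproblem S v = lambda N v  ->  signal-to-noise eigenmodes
snr, V = eigh(S, N)
order = np.argsort(snr)[::-1]
keep = order[:60]                       # retain the 60 highest S/N modes

d = rng.multivariate_normal(np.zeros(npix), S + N)   # one simulated data vector
d_compressed = V[:, keep].T @ d                      # compressed representation
print(d.shape, "->", d_compressed.shape)
```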

  4. Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.

    2014-08-01

    Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson-distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
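    As a concrete illustration of a Poisson maximum-likelihood histogram fit, the sketch below minimizes the Poisson negative log-likelihood (the Cash statistic up to a constant) for a Gaussian line shape. The model and all numerical values are illustrative, not the Mn Kα fluorescence model used in the paper.

```python
# Sketch: Poisson ML fit of a Gaussian line shape to binned counts.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
edges = np.linspace(-5, 5, 61)
centers = 0.5 * (edges[:-1] + edges[1:])
true_amp, true_mu, true_sigma = 200.0, 0.3, 1.2

def model(params, x):
    amp, mu, sigma = params
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

counts = rng.poisson(model((true_amp, true_mu, true_sigma), centers))

def poisson_nll(params):
    lam = model(params, centers)
    lam = np.clip(lam, 1e-12, None)              # keep log finite in empty bins
    return np.sum(lam - counts * np.log(lam))    # Cash statistic up to a constant

fit = minimize(poisson_nll, x0=(150.0, 0.0, 1.0), method="Nelder-Mead")
print(fit.x)   # amplitude, centre, width recovered without the chi-square bias
```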

  5. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. Basic algebra is excessively complicated and a Maple code implementation is an important task in the solution process. Low order maximum likelihood moments are given and also Fisher's examples relating to data associated with ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.
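    For reference, a minimal numerical sketch of negative binomial maximum-likelihood estimation: the log-likelihood is written with gammaln in the (size k, mean m) parameterization and maximized directly. The data and starting values are illustrative, not the tick-count data discussed above.

```python
# Sketch: ML estimation of negative binomial size k and mean m from count data.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def nb_negloglik(params, x):
    k, m = params                     # k > 0 dispersion (size), m > 0 mean
    p = k / (k + m)
    ll = (gammaln(x + k) - gammaln(k) - gammaln(x + 1)
          + k * np.log(p) + x * np.log(1 - p))
    return -np.sum(ll)

rng = np.random.default_rng(2)
x = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + 5.0), size=400)   # simulated, k=2, m=5
fit = minimize(nb_negloglik, x0=(1.0, float(x.mean())), args=(x,),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
k_hat, m_hat = fit.x
print(k_hat, m_hat)
```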

  6. Efficient Pairwise Composite Likelihood Estimation for Spatial-Clustered Data

    PubMed Central

    Bai, Yun; Kang, Jian; Song, Peter X.-K.

    2015-01-01

    Spatial-clustered data refer to high-dimensional correlated measurements collected from units or subjects that are spatially clustered. Such data arise frequently from studies in social and health sciences. We propose a unified modeling framework, termed GeoCopula, to characterize both large-scale variation and small-scale variation for various data types, including continuous data, binary data, and count data as special cases. To overcome challenges in the estimation and inference for the model parameters, we propose an efficient composite likelihood approach in which the estimation efficiency results from a construction of over-identified joint composite estimating equations. Consequently, the statistical theory for the proposed estimation is developed by extending the classical theory of the generalized method of moments. A clear advantage of the proposed estimation method is its computational feasibility. We conduct several simulation studies to assess the performance of the proposed models and estimation methods for both Gaussian and binary spatial-clustered data. Results show a clear improvement in estimation efficiency over the conventional composite likelihood method. An illustrative data example is included to motivate and demonstrate the proposed method. PMID:24945876

  7. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: (1) failure to fully exploit face-specific structure information in correspondence estimation and (2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062

  8. Digital combining-weight estimation for broadband sources using maximum-likelihood estimates

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.; Vilnrotter, V. A.

    1994-01-01

    An algorithm previously described for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood estimate provides some improvement in performance, at the cost of an increase in computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.

  9. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  10. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  11. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimate does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant to separation and that the estimates always exist. The effect of high leverage points on the performance of the maximum estimated likelihood estimator is then investigated through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  12. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  13. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  14. Precision of maximum likelihood estimation in adaptive designs.

    PubMed

    Graf, Alexandra Christine; Gutjahr, Georg; Brannath, Werner

    2016-03-15

    There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type 1 error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where adaptation rules are not fixed in advance, so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules, such as a maximum sample size. We consider designs that start with k treatment groups and a common control and where selection of a single treatment and control is performed at the interim analysis, with the possibility to reassess each of the sample sizes. We consider the case of unlimited sample size reassessments as well as several realistically restricted sample size reassessment rules. PMID:26459506
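    A minimal Monte Carlo sketch of the selection bias that motivates this work: when the best-looking of k arms is chosen at an interim analysis and the naive pooled sample mean (the MLE) is reported, the estimate is biased upward even when every true effect is zero. The stage sizes and number of arms below are illustrative, and no sample size reassessment rule is modeled.

```python
# Sketch: bias of the MLE (pooled sample mean) after treatment selection at interim.
import numpy as np

rng = np.random.default_rng(9)
k, n1, n2, n_trials = 4, 50, 50, 100_000
true_effects = np.zeros(k)            # all arms equal to control: true effect 0

stage1 = rng.normal(true_effects, 1 / np.sqrt(n1), size=(n_trials, k))   # stage-1 arm means
selected = stage1.argmax(axis=1)                                         # pick the best-looking arm
stage2 = rng.normal(true_effects[selected], 1 / np.sqrt(n2))             # stage-2 mean of selected arm
idx = np.arange(n_trials)
mle = (n1 * stage1[idx, selected] + n2 * stage2) / (n1 + n2)             # pooled sample mean

print("mean bias of the MLE:", mle.mean())   # positive although every true effect is 0
```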

  15. A penalized likelihood approach for robust estimation of isoform expression

    PubMed Central

    2016-01-01

    Ultra high-throughput sequencing of transcriptomes (RNA-Seq) has enabled the accurate estimation of gene expression at individual isoform level. However, systematic biases introduced during the sequencing and mapping processes as well as incompleteness of the transcript annotation databases may cause the estimates of isoform abundances to be unreliable, and in some cases, highly inaccurate. This paper introduces a penalized likelihood approach to detect and correct for such biases in a robust manner. Our model extends those previously proposed by introducing bias parameters for reads. An L1 penalty is used for the selection of non-zero bias parameters. We introduce an efficient algorithm for model fitting and analyze the statistical properties of the proposed model. Our experimental studies on both simulated and real datasets suggest that the model has the potential to improve isoform-specific gene expression estimates and identify incompletely annotated gene models.

  16. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
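    For comparison with the convolution formulation, a minimal sketch of the direct optimization view of circle fitting, assuming i.i.d. Gaussian radial noise so that the maximum-likelihood fit reduces to geometric least squares. The data and starting values are illustrative; the phase-coded kernel of the paper is not reproduced.

```python
# Sketch: ML (geometric least-squares) fit of a circle to noisy boundary points.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 200)
a0, b0, r0 = 4.0, -1.5, 3.0
x = a0 + r0 * np.cos(theta) + rng.normal(0, 0.1, theta.size)
y = b0 + r0 * np.sin(theta) + rng.normal(0, 0.1, theta.size)

def nll(params):
    a, b, r = params
    d = np.hypot(x - a, y - b)
    return np.sum((d - r) ** 2)       # proportional to the Gaussian negative log-likelihood

start = (x.mean(), y.mean(), np.hypot(x - x.mean(), y - y.mean()).mean())
fit = minimize(nll, start, method="Nelder-Mead")
print(fit.x)    # estimated (centre_x, centre_y, radius)
```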

  17. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    NASA Astrophysics Data System (ADS)

    Boyer, E.; Petitdidier, M.; Larzabal, P.

    2004-11-01

    This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills that gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  18. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    PubMed Central

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875

  19. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
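    A minimal sketch of the kind of Poisson maximum-likelihood (Poisson regression) fit described above, for a linear-quadratic dose-response curve λ(d) = c + αd + βd². The doses, cell numbers, dicentric counts, and the identity link with nonnegative coefficients are all illustrative assumptions, not data from the paper.

```python
# Sketch: Poisson ML fit of a linear-quadratic dicentric dose-response curve.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])           # Gy (invented)
cells = np.array([5000, 3000, 2000, 1000, 800, 500])       # cells scored per dose (invented)
dicentrics = np.array([4, 45, 110, 240, 390, 400])         # observed aberrations (invented)

def negloglik(params):
    c, alpha, beta = params
    lam = cells * (c + alpha * dose + beta * dose ** 2)     # expected counts per dose group
    lam = np.clip(lam, 1e-12, None)
    return np.sum(lam - dicentrics * np.log(lam))

fit = minimize(negloglik, x0=(0.001, 0.02, 0.05),
               method="L-BFGS-B",
               bounds=[(0, None), (0, None), (0, None)])
print(fit.x)    # background, linear, and quadratic coefficients per cell
```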

  20. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  1. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations is discussed for numerically evaluating the maximum-likelihood estimate. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.
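    The sketch below shows the classic unit-step-size fixed-point (EM-type) iteration for mixture proportions when the component densities are known, which is the simplest instance of a successive-approximations scheme of this kind; it updates all proportions rather than a subset, and the components and data are illustrative.

```python
# Sketch: fixed-point iteration for mixture proportions with known component densities.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
true_pi = np.array([0.2, 0.5, 0.3])
means = np.array([-2.0, 0.0, 3.0])
z = rng.choice(3, size=2000, p=true_pi)
x = rng.normal(means[z], 1.0)

f = np.column_stack([norm.pdf(x, m, 1.0) for m in means])   # f[i, j] = f_j(x_i)
pi = np.full(3, 1 / 3)                                      # initial proportions
for _ in range(200):
    resp = pi * f
    resp /= resp.sum(axis=1, keepdims=True)                 # posterior membership probabilities
    pi_new = resp.mean(axis=0)                              # unit-step-size proportion update
    if np.max(np.abs(pi_new - pi)) < 1e-8:
        break
    pi = pi_new
print(pi)        # close to the true proportions (0.2, 0.5, 0.3)
```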

  2. Fluorescence resonance energy transfer imaging by maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yupeng; Yuan, Yumin; Holmes, Timothy J.

    2004-06-01

    Fluorescence resonance energy transfer (FRET) is a fluorescence microscope imaging process involving nonradiative energy transfer between two fluorophores (the donor and the acceptor). FRET is used to detect the chemical interactions and, in some cases, measure the distance between molecules. Existing approaches do not always well compensate for bleed-through in excitation, cross-talk in emission detection and electronic noise in image acquisition. We have developed a system to automatically search for maximum-likelihood estimates of the FRET image, donor concentration and acceptor concentration. It also produces other system parameters, such as excitation/emission filter efficiency and FRET conversion factor. The mathematical model is based upon a Poisson process since the CCD camera is a photon-counting device. The main advantage of the approach is that it automatically compensates for bleed-through and cross-talk degradations. Tests are presented with synthetic images and with real data referred to as positive and negative controls, where FRET is known to occur and to not occur, respectively. The test results verify the claimed advantages by showing consistent accuracy in detecting FRET and by showing improved accuracy in calculating FRET efficiency.

  3. Identification of Sparse Neural Functional Connectivity using Penalized Likelihood Estimation and Basis Functions

    PubMed Central

    Song, Dong; Wang, Haonan; Tu, Catherine Y.; Marmarelis, Vasilis Z.; Hampson, Robert E.; Deadwyler, Sam A.; Berger, Theodore W.

    2013-01-01

    One key problem in computational neuroscience and neural engineering is the identification and modeling of functional connectivity in the brain using spike train data. To reduce model complexity, alleviate overfitting, and thus facilitate model interpretation, sparse representation and estimation of functional connectivity is needed. Sparsities include global sparsity, which captures the sparse connectivities between neurons, and local sparsity, which reflects the active temporal ranges of the input-output dynamical interactions. In this paper, we formulate a generalized functional additive model (GFAM) and develop the associated penalized likelihood estimation methods for such a modeling problem. A GFAM consists of a set of basis functions convolving the input signals, and a link function generating the firing probability of the output neuron from the summation of the convolutions weighted by the sought model coefficients. Model sparsities are achieved by using various penalized likelihood estimations and basis functions. Specifically, we introduce two variations of the GFAM using a global basis (e.g., Laguerre basis) and group LASSO estimation, and a local basis (e.g., B-spline basis) and group bridge estimation, respectively. We further develop an optimization method based on quadratic approximation of the likelihood function for the estimation of these models. Simulation and experimental results show that both group-LASSO-Laguerre and group-bridge-B-spline can capture faithfully the global sparsities, while the latter can replicate accurately and simultaneously both global and local sparsities. The sparse models outperform the full models estimated with the standard maximum likelihood method in out-of-sample predictions. PMID:23674048

  4. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  5. Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.

    ERIC Educational Resources Information Center

    Harwell, Michael R.; And Others

    1988-01-01

    The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)
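    As a compact companion to this didactic, the sketch below computes a marginal likelihood for Rasch item difficulties by integrating over a standard normal ability distribution with Gauss-Hermite quadrature and maximizing it directly, rather than through the MML/EM steps the article describes. The simulated data and item values are illustrative assumptions.

```python
# Sketch: marginal ML for Rasch item difficulties via Gauss-Hermite quadrature.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n_persons, b_true = 1000, np.array([-1.0, -0.3, 0.4, 1.2])
theta = rng.normal(size=(n_persons, 1))
x = (rng.uniform(size=(n_persons, len(b_true))) <
     1.0 / (1.0 + np.exp(-(theta - b_true)))).astype(float)

# quadrature nodes/weights for a standard normal ability distribution
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / weights.sum()

def neg_marginal_loglik(b):
    p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))   # (Q, J) response probabilities
    logp_iq = x @ np.log(p).T + (1 - x) @ np.log(1 - p).T       # log P(x_i | theta_q)
    lik_i = np.exp(logp_iq) @ weights                           # marginal likelihood per person
    return -np.sum(np.log(lik_i))

fit = minimize(neg_marginal_loglik, x0=np.zeros(4), method="BFGS")
print(fit.x)    # recovered item difficulties, close to b_true
```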

  6. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  7. Bayesian speckle tracking. Part I: an implementable perturbation to the likelihood function for ultrasound displacement estimation.

    PubMed

    Byram, Brett; Trahey, Gregg E; Palmeri, Mark

    2013-01-01

    Accurate and precise displacement estimation has been a hallmark of clinical ultrasound. Displacement estimation accuracy has largely been considered to be limited by the Cramer-Rao lower bound (CRLB). However, the CRLB only describes the minimum variance obtainable from unbiased estimators. Unbiased estimators are generally implemented using Bayes' theorem, which requires a likelihood function. The classic likelihood function for the displacement estimation problem is not discriminative and is difficult to implement for clinically relevant ultrasound with diffuse scattering. Because the classic likelihood function is not effective, a perturbation is proposed. The proposed likelihood function was evaluated and compared against the classic likelihood function by converting both to posterior probability density functions (PDFs) using a noninformative prior. Example results are reported for bulk motion simulations using a 6λ tracking kernel and 30 dB SNR for 1000 data realizations. The canonical likelihood function assigned the true displacement a mean probability of only 0.070 ± 0.020, whereas the new likelihood function assigned the true displacement a much higher probability of 0.22 ± 0.16. The new likelihood function shows improvements at least for bulk motion, acoustic radiation force induced motion, and compressive motion, and at least for SNRs greater than 10 dB and kernel lengths between 1.5 and 12λ. PMID:23287920

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent and asymptotically unbiased as the sample size increases to infinity, and the estimates have smaller variance than those from the other statistical methods considered as the sample size grows. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.
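    As a generic illustration of the estimation approach, the sketch below fits a two-component Gaussian mixture by EM-based maximum likelihood. The simulated univariate data are a stand-in; the rubber-price and exchange-rate series analyzed in the paper are not reproduced.

```python
# Sketch: EM maximum-likelihood fit of a two-component Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 1.0]), np.array([1.0, 1.0])
for _ in range(300):
    # E-step: responsibilities
    dens = w * norm.pdf(x[:, None], mu, sd)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted ML updates of weights, means, and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print(w, mu, sd)
```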

  9. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  10. Building unbiased estimators from non-gaussian likelihoods with application to shear estimation

    SciTech Connect

    Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; Slosar, Anze

    2015-01-15

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong’s estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.

  11. Building unbiased estimators from non-gaussian likelihoods with application to shear estimation

    DOE PAGESBeta

    Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; Slosar, Anze

    2015-01-15

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong’s estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.

  12. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    SciTech Connect

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2.
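    To make the first-order construction concrete, the sketch below builds the estimator θ̂ = θ_fid + F(θ_fid)⁻¹ × score(θ_fid) for the scale of a zero-mean Gaussian and checks by Monte Carlo that its bias scales quadratically with the offset between the true value and the fiducial expansion point, i.e. it is unbiased to first order. This is a toy analogue under assumed values, not the shear estimator of the paper.

```python
# Sketch: first-order (score / Fisher-matrix) estimator for a Gaussian scale parameter.
import numpy as np

rng = np.random.default_rng(7)
n, sigma_fid, n_trials = 50, 1.0, 200_000

for delta in (0.05, 0.1, 0.2, 0.4):
    sigma_true = sigma_fid * (1 + delta)
    y = rng.normal(0.0, sigma_true, size=(n_trials, n))
    # score and Fisher information for sigma, evaluated at the fiducial value
    score = (y ** 2).sum(axis=1) / sigma_fid ** 3 - n / sigma_fid
    fisher = 2 * n / sigma_fid ** 2
    sigma_hat = sigma_fid + score / fisher
    bias = sigma_hat.mean() - sigma_true
    print(f"offset {delta:4.2f}  bias {bias:+.5f}  bias/offset^2 {bias / delta**2:+.3f}")
```

The ratio bias/offset² stays roughly constant (near 0.5 in this toy), which is the quadratic-bias behavior the abstract describes for first-order estimators.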

  13. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
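    A minimal sketch of the two likelihood evaluations being compared, assuming one fully correlated systematic error component on top of independent random errors: the conventional route uses the multivariate Gaussian with the full covariance (matrix inversion), while the sampling route averages the independent-error likelihood over sampled systematic shifts. All numbers are illustrative.

```python
# Sketch: likelihood of correlated experimental points via matrix inversion vs. sampling.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(6)
t = np.array([1.0, 1.2, 1.5, 1.9])           # model prediction at the measured points
sig_r = np.array([0.05, 0.05, 0.08, 0.10])   # independent (random) uncertainties
sig_s = 0.10                                 # shared (systematic) uncertainty
y = t + rng.normal(0, sig_r) + rng.normal(0, sig_s)   # synthetic measurements

# Conventional: multivariate Gaussian with full covariance (requires inversion)
cov = np.diag(sig_r ** 2) + sig_s ** 2 * np.ones((4, 4))
L_matrix = multivariate_normal(mean=t, cov=cov).pdf(y)

# Sampling: average the independent-error likelihood over sampled systematic shifts
s = rng.normal(0, sig_s, size=100_000)
L_sampled = np.mean(np.prod(norm.pdf(y[None, :], t[None, :] + s[:, None], sig_r), axis=1))

print(L_matrix, L_sampled)   # the two estimates agree as the sample size grows
```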

  14. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  15. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  16. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  17. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  18. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  19. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  20. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  1. Revising probability estimates: Why increasing likelihood means increasing impact.

    PubMed

    Maglio, Sam J; Polman, Evan

    2016-08-01

    Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision-moving upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event-probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event-probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events. (PsycINFO Database Record) PMID:27281350

  2. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    NASA Technical Reports Server (NTRS)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
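
    For photon-counting data the appropriate likelihood is Poisson, and competing parameter values can be compared with a likelihood-ratio statistic. The sketch below is a generic illustration in that spirit rather than a reproduction of Cash's procedure: it fits the amplitude of a hypothetical spectral line on a known flat background by maximizing the Poisson log-likelihood and then forms the likelihood-ratio statistic against the background-only model; the model `rate`, the data, and all parameter values are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def rate(amplitude, background, x):
    """Hypothetical model: Gaussian line of given amplitude on a flat background."""
    return background + amplitude * np.exp(-0.5 * ((x - 5.0) / 0.5) ** 2)

def poisson_loglike(counts, model):
    # Terms independent of the parameters (log k!) are dropped.
    return np.sum(counts * np.log(model) - model)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
counts = rng.poisson(rate(4.0, 2.0, x))

# ML fit of the line amplitude with the background held fixed (illustration only).
fit = minimize_scalar(lambda a: -poisson_loglike(counts, rate(a, 2.0, x)),
                      bounds=(0.0, 50.0), method="bounded")
amp_hat = fit.x

# Likelihood-ratio statistic against the background-only (amplitude = 0) model.
lr = 2.0 * (poisson_loglike(counts, rate(amp_hat, 2.0, x))
            - poisson_loglike(counts, rate(0.0, 2.0, x)))
p_value = chi2.sf(lr, df=1)   # Wilks' approximation, one extra free parameter
print(amp_hat, lr, p_value)
```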

  3. PsiMLE: A maximum-likelihood estimation approach to estimating psychophysical scaling and variability more reliably, efficiently, and flexibly.

    PubMed

    Odic, Darko; Im, Hee Yeon; Eisinger, Robert; Ly, Ryan; Halberda, Justin

    2016-06-01

    A simple and popular psychophysical model-usually described as overlapping Gaussian tuning curves arranged along an ordered internal scale-is capable of accurately describing both human and nonhuman behavioral performance and neural coding in magnitude estimation, production, and reproduction tasks for most psychological dimensions (e.g., time, space, number, or brightness). This model traditionally includes two parameters that determine how a physical stimulus is transformed into a psychological magnitude: (1) an exponent that describes the compression or expansion of the physical signal into the relevant psychological scale (β), and (2) an estimate of the amount of inherent variability (often called internal noise) in the Gaussian activations along the psychological scale (σ). To date, linear slopes on log-log plots have traditionally been used to estimate β, and a completely separate method of averaging coefficients of variance has been used to estimate σ. We provide a respectful, yet critical, review of these traditional methods, and offer a tutorial on a maximum-likelihood estimation (MLE) and a Bayesian estimation method for estimating both β and σ [PsiMLE(β,σ)], coupled with free software that researchers can use to implement it without a background in MLE or Bayesian statistics (R-PsiMLE). We demonstrate the validity, reliability, efficiency, and flexibility of this method through a series of simulations and behavioral experiments, and find the new method to be superior to the traditional methods in all respects. PMID:25987306
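
    A minimal sketch of the joint-estimation idea, not the authors' R-PsiMLE package: assume, purely for illustration, that the response to a stimulus of magnitude I is normally distributed around I**beta with scalar variability sigma * I**beta, and estimate beta and sigma together by maximizing the likelihood instead of fitting a slope on a log-log plot and averaging coefficients of variation separately.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
stimuli = np.repeat(np.array([8.0, 12.0, 16.0, 24.0, 32.0]), 40)
beta_true, sigma_true = 0.8, 0.15
# Assumed generative model: response ~ Normal(I**beta, sigma * I**beta).
responses = rng.normal(stimuli**beta_true, sigma_true * stimuli**beta_true)

def neg_loglike(params):
    beta, sigma = params
    if sigma <= 0:
        return np.inf
    mu = stimuli**beta
    return -np.sum(norm.logpdf(responses, loc=mu, scale=sigma * mu))

fit = minimize(neg_loglike, x0=[1.0, 0.3], method="Nelder-Mead")
beta_hat, sigma_hat = fit.x
print(beta_hat, sigma_hat)
```

    Because both parameters come from the same likelihood, a single data set yields beta and sigma jointly, which is the efficiency argument the abstract makes.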

  4. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  5. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, local maxima of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
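
    The procedure most commonly used for this problem today is the EM algorithm, which alternates between computing component responsibilities and re-estimating the component parameters until the log-likelihood stabilizes. The one-dimensional, two-component sketch below illustrates that general idea under standard assumptions; it is not a reproduction of Peters and Walker's specific iteration.

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM for a two-component univariate normal mixture (means, sds, mixing weight)."""
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from component 1.
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1.0 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted ML updates of the parameters.
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 0.7, 200)])
print(em_two_normals(x))
```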

  7. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be free of random errors. The resulting algorithm is verified with simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  8. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  9. Out-of-atlas likelihood estimation using multi-atlas segmentation

    PubMed Central

    Asman, Andrew J.; Chambless, Lola B.; Thompson, Reid C.; Landman, Bennett A.

    2013-01-01

    Purpose: Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to “in-atlas” applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases. Methods: The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image. Results: Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain—an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading). Conclusions: The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on

  10. A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations

    PubMed Central

    Anderson, Amy D.; Weir, Bruce S.

    2007-01-01

    A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212

  11. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    PubMed Central

    Valleriani, Angelo

    2016-01-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct. PMID:27527811

  12. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  13. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  14. On penalized likelihood estimation for a non-proportional hazards regression model.

    PubMed

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times. PMID:24791034

  15. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  16. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  17. Estimation of Maximum Likelihood of the Unextendable Dead Time Period in a Flow of Physical Events

    NASA Astrophysics Data System (ADS)

    Gortsev, A. M.; Solov'ev, A. A.

    2016-03-01

    A flow of physical events (photons, electrons, etc.) is studied. One of the mathematical models of such flows is the MAP flow of events. The flow is observed under conditions of an unextendable dead time period whose duration is unknown. The dead time period is estimated by the method of maximum likelihood from observations of the arrival instants of events.

  18. Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences

    PubMed Central

    Ekisheva, Svetlana

    2010-01-01

    Probabilistic models for biological sequences (DNA and proteins) have many useful applications in bioinformatics. Normally, the values of parameters of these models have to be estimated from empirical data. However, even for the most common estimates, the maximum likelihood (ML) estimates, properties have not been completely explored. Here we assess the uniform accuracy of the ML estimates for models of several types: the independence model, the Markov chain and the hidden Markov model (HMM). Particularly, we derive rates of decay of the maximum estimation error by employing the measure concentration as well as the Gaussian approximation, and compare these rates. PMID:21318122

  19. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks.

    PubMed

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  20. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  1. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
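
    The computational pattern described, solving the likelihood equations by repeated Newton-Raphson iterations, is standard for ML fitting. The sketch below applies it to an ordinary Poisson regression rather than to the NDMMF itself (whose equations are not reproduced here): the score vector and Hessian of the log-likelihood are assembled at each iteration and the coefficient vector is updated until the step size is negligible.

```python
import numpy as np

def poisson_newton_raphson(X, y, n_iter=25, tol=1e-10):
    """ML estimation of Poisson-regression coefficients by Newton-Raphson on the score equations."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                # model mean for each observation
        score = X.T @ (y - mu)               # gradient of the log-likelihood
        hessian = -(X * mu[:, None]).T @ X   # second derivative matrix
        step = np.linalg.solve(hessian, -score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(poisson_newton_raphson(X, y))
```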

  2. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  3. Maximum likelihood estimation with poisson (counting) statistics for waste drum inspection

    SciTech Connect

    Goodman, D.

    1997-05-01

    This note provides a preliminary look at the issues involved in waste drum inspection when emission levels are so low that central limit theorem arguments do not apply and counting statistics, rather than the usual Gaussian assumption, must be considered. At very high count rates the assumption of Gaussian statistics is reasonable, and the maximum likelihood arguments that we discuss below for low count rates would lead to the usual approach of least squares fits. Least squares is not the best technique for low counts, however, and we develop the maximum likelihood estimators for the low count case.
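
    A small illustration of the point, with simulated counts rather than waste-drum data: for a constant emission rate observed over counting intervals of unequal length, the Poisson ML estimate pools the counts and times directly, whereas an unweighted least-squares average of the observed rates implicitly assumes equal Gaussian errors on every interval.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_true = 0.3                               # counts per second, deliberately low
t = rng.uniform(5.0, 60.0, size=40)          # counting times of unequal length
y = rng.poisson(lam_true * t)

# Poisson maximum-likelihood estimate: for y_i ~ Poisson(lam * t_i), setting the derivative
# of sum(y_i * log(lam * t_i) - lam * t_i) to zero gives
lam_mle = y.sum() / t.sum()

# Naive (unweighted) least-squares fit to the observed rates y_i / t_i.
lam_ls = np.mean(y / t)

print(lam_mle, lam_ls)
```

    Repeating the simulation many times shows that the ML estimate has the smaller spread, which is the low-count advantage argued for above.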

  4. Abundance estimation from multiple photo surveys: confidence distributions and reduced likelihoods for bowhead whales off Alaska.

    PubMed

    Schweder, Tore

    2003-12-01

    Maximum likelihood estimates of abundance are obtained from repeated photographic surveys of a closed stratified population with naturally marked and unmarked individuals. Capture intensities are assumed log-linear in stratum, year, and season. In the chosen model, an approximate confidence distribution for total abundance of bowhead whales, with an accompanying likelihood reduced of nuisance parameters, is found from a parametric bootstrap experiment. The confidence distribution depends on the assumed study protocol. A confidence distribution that is exact (except for the effect of discreteness) is found by conditioning in the unstratified case without unmarked individuals. PMID:14969476

  5. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  6. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random data set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short data sets drawn from smooth PDFs and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.

  7. Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2014-11-01

    This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that, under indoor circumstances, UWB two-way time-of-flight ranges follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, to adjust the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each pair of nodes from samples collected in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a multipath-dense environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
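
    A minimal sketch of the comparison being made, with invented anchor coordinates, mixture parameters, and ranges rather than the authors' UWB measurements: when range errors follow a two-component Gaussian mixture (e.g., line-of-sight and non-line-of-sight), maximizing the mixture likelihood for the node coordinates can give a better position fix than the ordinary least-squares solution.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
node_true = np.array([3.0, 6.0])
w, s_los, s_nlos, bias = 0.8, 0.05, 0.6, 0.4   # assumed LOS/NLOS mixture of range errors

rng = np.random.default_rng(6)
dists = np.linalg.norm(anchors - node_true, axis=1)
n_rep = 200                                    # repeated ranges per anchor (high sampling rate)
los = rng.random((n_rep, anchors.shape[0])) < w
ranges = dists + np.where(los, rng.normal(0.0, s_los, los.shape),
                          rng.normal(bias, s_nlos, los.shape))

def neg_loglike(p):
    d = np.linalg.norm(anchors - p, axis=1)
    e = ranges - d
    lik = w * norm.pdf(e, 0.0, s_los) + (1.0 - w) * norm.pdf(e, bias, s_nlos)
    return -np.sum(np.log(lik))

def sum_squares(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return np.sum((ranges - d) ** 2)

x0 = np.array([5.0, 5.0])
mle = minimize(neg_loglike, x0, method="Nelder-Mead").x
lsq = minimize(sum_squares, x0, method="Nelder-Mead").x
print("MLE:", mle, "LS:", lsq, "truth:", node_true)
```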

  8. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    PubMed

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, so the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute-value operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the R-R peak interval. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) to realize the correlation technique, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system. PMID:19162641

  9. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
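
    The log-determinant point is easy to illustrate. In a Gaussian log-likelihood with covariance matrix R, evaluating det(R) directly can underflow or overflow even for moderately sized matrices, whereas summing the logs of the Cholesky (or LU) diagonal is stable. The following is a generic sketch of that computation, not the MLEMTC code; the exponential-plus-nugget covariance is an assumed stand-in for a spatial correlation function combined with white noise.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglike(y, mean, R):
    """Multivariate normal log-likelihood with the log-determinant taken from a Cholesky factor."""
    c, lower = cho_factor(R)
    logdet = 2.0 * np.sum(np.log(np.diag(c)))      # log det R without forming det R itself
    resid = y - mean
    quad = resid @ cho_solve((c, lower), resid)
    n = y.size
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

# Exponential spatial correlation plus a white-noise "nugget" term.
n = 400
coords = np.linspace(0.0, 1.0, n)
R = np.exp(-np.abs(coords[:, None] - coords[None, :]) / 0.2) + 0.1 * np.eye(n)
rng = np.random.default_rng(7)
y = np.linalg.cholesky(R) @ rng.standard_normal(n)

print(gaussian_loglike(y, np.zeros(n), R))
print(np.linalg.det(R))   # the direct determinant typically underflows for matrices this size
```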

  10. Phantom study of tear film dynamics with optical coherence tomography and maximum-likelihood estimation

    PubMed Central

    Huang, Jinxin; Lee, Kye-sung; Clarkson, Eric; Kupinski, Matthew; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2016-01-01

    In this Letter, we implement a maximum-likelihood estimator to interpret optical coherence tomography (OCT) data for the first time, based on Fourier-domain OCT and a two-interface tear film model. We use the root mean square error as a figure of merit to quantify the system performance of estimating the tear film thickness. With the methodology of task-based assessment, we study the trade-off between system imaging speed (temporal resolution of the dynamics) and the precision of the estimation. Finally, the estimator is validated with a digital tear-film dynamics phantom. PMID:23938923

  11. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  12. Estimating Effective Population Size from Temporally Spaced Samples with a Novel, Efficient Maximum-Likelihood Algorithm

    PubMed Central

    Hui, Tin-Yu J.; Burt, Austin

    2015-01-01

    The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator NB^ for contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article tries to work around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator NB^, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test to test for the equality of Ne throughout the sampling horizon. An R package “NB” is now available for download to implement the method described in this article. PMID:25747459

  13. Derivative-free restricted maximum likelihood estimation in animal models with a sparse matrix solver.

    PubMed

    Boldman, K G; Van Vleck, L D

    1991-12-01

    Estimation of (co)variance components by derivative-free REML requires repeated evaluation of the log-likelihood function of the data. Gaussian elimination of the augmented mixed model coefficient matrix is often used to evaluate the likelihood function, but it can be costly for animal models with large coefficient matrices. This study investigated the use of a direct sparse matrix solver to obtain the log-likelihood function. The sparse matrix package SPARSPAK was used to reorder the mixed model equations once and then repeatedly to solve the equations by Cholesky factorization to generate the terms required to calculate the likelihood. The animal model used for comparison contained 19 fixed levels, 470 maternal permanent environmental effects, and 1586 direct and 1586 maternal genetic effects, resulting in a coefficient matrix of order 3661 with .3% nonzero elements after including numerator relationships. Compared with estimation via Gaussian elimination of the unordered system, utilization of SPARSPAK required 605 and 240 times less central processing unit time on mainframes and personal computers, respectively. The SPARSPAK package also required less memory and provided solutions for all effects in the model. PMID:1787202

  14. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
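
    The REML-MVN recipe itself is short: treat the vector of unique elements of the estimated G as multivariate normal, with covariance taken from the inverse of the REML information matrix, draw many samples, rebuild symmetric matrices, and read the sampling variability of any statistic of interest off the draws. The toy sketch below assumes a 2 x 2 G and an arbitrary sampling covariance purely for illustration; in practice both would come from the mixed-model fit.

```python
import numpy as np

rng = np.random.default_rng(8)

# Estimated genetic covariance matrix (toy values) and the sampling covariance of its
# unique elements [g11, g12, g22]; the latter would normally be the inverse REML information matrix.
G_hat = np.array([[1.0, 0.3],
                  [0.3, 0.8]])
idx = np.triu_indices(2)
vech_hat = G_hat[idx]
sampling_cov = 0.01 * np.eye(vech_hat.size)      # assumed, for illustration only

def unvech(v):
    G = np.zeros((2, 2))
    G[idx] = v
    return G + np.triu(G, 1).T                   # rebuild the symmetric matrix

# REML-MVN resampling: draw parameter vectors, rebuild G, evaluate a statistic.
draws = rng.multivariate_normal(vech_hat, sampling_cov, size=5000)
mean_evolvability = np.array([np.trace(unvech(v)) / 2.0 for v in draws])   # average of b'Gb over unit b

print(mean_evolvability.mean(), mean_evolvability.std())   # point estimate and approximate sampling error
```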

  15. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  16. Marginal Likelihood Estimate Comparisons to Obtain Optimal Species Delimitations in Silene sect. Cryptoneurae (Caryophyllaceae)

    PubMed Central

    Aydin, Zeynep; Marcussen, Thomas; Ertekin, Alaattin Selcuk; Oxelman, Bengt

    2014-01-01

    Coalescent-based inference of phylogenetic relationships among species takes into account gene tree incongruence due to incomplete lineage sorting, but for such methods to make sense species have to be correctly delimited. Because alternative assignments of individuals to species result in different parametric models, model selection methods can be applied to optimise the species classification model. In a Bayesian framework, Bayes factors (BF), based on marginal likelihood estimates, can be used to test a range of possible classifications for the group under study. Here, we explore BF and the Akaike Information Criterion (AIC) to discriminate between different species classifications in the flowering plant lineage Silene sect. Cryptoneurae (Caryophyllaceae). We estimated marginal likelihoods for different species classification models via the Path Sampling (PS), Stepping Stone sampling (SS), and Harmonic Mean Estimator (HME) methods implemented in BEAST. To select among alternative species classification models, a posterior simulation-based analog of the AIC through Markov chain Monte Carlo analysis (AICM) was also performed. The results are compared to outcomes from the software BP&P. Our results agree with another recent study that marginal likelihood estimates from PS and SS methods are useful for comparing different species classifications, and strongly support the recognition of the newly described species S. ertekinii. PMID:25216034

  17. Maximum likelihood estimation for model Mt,α for capture-recapture data with misidentification.

    PubMed

    Vale, R T R; Fewster, R M; Carroll, E L; Patenaude, N J

    2014-12-01

    We investigate model Mt,α  for abundance estimation in closed-population capture-recapture studies, where animals are identified from natural marks such as DNA profiles or photographs of distinctive individual features. Model Mt,α  extends the classical model Mt  to accommodate errors in identification, by specifying that each sample identification is correct with probability α and false with probability 1-α. Information about misidentification is gained from a surplus of capture histories with only one entry, which arise from false identifications. We derive an exact closed-form expression for the likelihood for model Mt,α  and show that it can be computed efficiently, in contrast to previous studies which have held the likelihood to be computationally intractable. Our fast computation enables us to conduct a thorough investigation of the statistical properties of the maximum likelihood estimates. We find that the indirect approach to error estimation places high demands on data richness, and good statistical properties in terms of precision and bias require high capture probabilities or many capture occasions. When these requirements are not met, abundance is estimated with very low precision and negative bias, and at the extreme better properties can be obtained by the naive approach of ignoring misidentification error. We recommend that model Mt,α  be used with caution and other strategies for handling misidentification error be considered. We illustrate our study with genetic and photographic surveys of the New Zealand population of southern right whale (Eubalaena australis). PMID:24942186

  18. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data. PMID:16634310

  19. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.

  20. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  1. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  2. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues are related to the difficulties in accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurately characterising the probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to establish whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
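
    For the fracture-length attribute, the difference between fitting a slope on a log-log plot and using a Maximum Likelihood Estimator is easy to demonstrate. Assuming, purely for this sketch, that lengths above a cutoff x_min follow a power law with exponent alpha, the ML estimate has the closed form alpha_hat = 1 + n / sum(ln(x_i / x_min)), while a straight-line fit to a binned log-log histogram is sensitive to binning and to noise in the sparsely populated tail.

```python
import numpy as np

rng = np.random.default_rng(9)
alpha_true, x_min = 2.5, 0.1
# Power-law distributed fracture lengths via inverse-transform sampling.
lengths = x_min * (1.0 - rng.random(2000)) ** (-1.0 / (alpha_true - 1.0))

# Maximum likelihood estimate of the power-law exponent (Hill-type estimator).
alpha_mle = 1.0 + lengths.size / np.sum(np.log(lengths / x_min))

# Naive alternative: slope of a straight line fitted to the binned log-log histogram.
hist, edges = np.histogram(lengths,
                           bins=np.logspace(np.log10(x_min), np.log10(lengths.max()), 25),
                           density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
keep = hist > 0
slope, _ = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)

print(alpha_mle, -slope)   # both approximate alpha_true; the MLE is usually the more stable of the two
```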

  3. The effect of relatedness on likelihood ratios and the use of conservative estimates.

    PubMed

    Brookfield, J F

    1995-01-01

    DNA profiling can be used to identify criminals through a match between their DNA and that left at the scene of a crime. The strength of the evidence supplied by a match in DNA profiles is given by the likelihood ratio. This, in turn, depends upon the probability that a match would be produced if the suspect were innocent. This probability could be strongly affected by the possibility of relatedness between the suspect and the true source of the scene-of-crime DNA profile. Methods are shown that allow for the possibility of such relatedness, arising either through population substructure or through a family relationship. Uncertainties about the likelihood ratio have been taken as grounds for the use of very conservative estimates of this quantity. The use of such conservative estimates can be shown to be neither necessary nor harmless. PMID:7607450

  4. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  5. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications. PMID:26979681

  6. Determination of linear displacement by envelope detection with maximum likelihood estimation

    SciTech Connect

    Lang, Kuo-Chen; Teng, Hui-Kang

    2010-09-20

    We demonstrate in this report an envelope detection technique with maximum likelihood estimation in a least-squares sense for determining displacement. This technique is achieved by sampling the amplitudes of quadrature signals resulting from a heterodyne interferometer, so that a displacement measurement resolution of the order of λ/10^4 is experimentally verified. A phase unwrapping procedure is also described and experimentally demonstrated, and indicates that the unambiguity range of displacement can be measured beyond a single wavelength.

  7. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, T.A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
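
    The censored-data likelihood underlying such estimators is straightforward to write down. The sketch below is a plain Tobit-type ML fit, not Cohn's adjusted estimator: log-concentrations are assumed normal, each value below the detection limit contributes the normal CDF evaluated at that limit, and the mean and standard deviation are found by maximizing the combined log-likelihood. All data and parameter values are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)
mu_true, sigma_true, detection_limit = -1.0, 0.8, np.log(0.3)
logc = rng.normal(mu_true, sigma_true, 150)
censored = logc < detection_limit                # the lab reports only "< DL" for these samples
observed = np.where(censored, detection_limit, logc)

def neg_loglike(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll_obs = norm.logpdf(observed[~censored], mu, sigma).sum()      # detected values: normal density
    ll_cens = norm.logcdf(detection_limit, mu, sigma) * censored.sum()  # censored values: normal CDF
    return -(ll_obs + ll_cens)

fit = minimize(neg_loglike, x0=[0.0, 1.0], method="Nelder-Mead")
print(fit.x)   # ML estimates of the log-scale mean and standard deviation
```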

  8. A Targeted Maximum Likelihood Estimator of a Causal Effect on a Bounded Continuous Outcome

    PubMed Central

    Gruber, Susan; van der Laan, Mark J.

    2010-01-01

    Targeted maximum likelihood estimation of a parameter of a data generating distribution, known to be an element of a semi-parametric model, involves constructing a parametric model through an initial density estimator with parameter ɛ representing an amount of fluctuation of the initial density estimator, where the score of this fluctuation model at ɛ = 0 equals the efficient influence curve/canonical gradient. The latter constraint can be satisfied by many parametric fluctuation models since it represents only a local constraint of its behavior at zero fluctuation. However, it is very important that the fluctuations stay within the semi-parametric model for the observed data distribution, even if the parameter can be defined on fluctuations that fall outside the assumed observed data model. In particular, in the context of sparse data, by which we mean situations where the Fisher information is low, a violation of this property can heavily affect the performance of the estimator. This paper presents a fluctuation approach that guarantees the fluctuated density estimator remains inside the bounds of the data model. We demonstrate this in the context of estimation of a causal effect of a binary treatment on a continuous outcome that is bounded. It results in a targeted maximum likelihood estimator that inherently respects known bounds, and consequently is more robust in sparse data situations than the targeted MLE using a naive fluctuation model. When an estimation procedure incorporates weights, observations having large weights relative to the rest heavily influence the point estimate and inflate the variance. Truncating these weights is a common approach to reducing the variance, but it can also introduce bias into the estimate. We present an alternative targeted maximum likelihood estimation (TMLE) approach that dampens the effect of these heavily weighted observations. As a substitution estimator, TMLE respects the global constraints of the observed data

  9. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  10. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  11. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.

  12. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    NASA Astrophysics Data System (ADS)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. Especially under the condition of a large array, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for a large array and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  13. A calibration method of self-referencing interferometry based on maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao

    2015-05-01

    Self-referencing interferometry has been widely used in wavefront sensing. However, the measured wavefront currently includes two parts: the real phase of the wavefront under test and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate the system error in a self-referencing interferometer. Firstly, at least three phase difference distributions are obtained from three positions of the tested component: one basic position, one rotation and one lateral translation. Then, the three phase difference data sets are combined through the maximum likelihood method into a maximum likelihood function, and the wavefront under test and the system errors are reconstructed by least-squares estimation with Zernike polynomials. The simulation results show that the proposed method can deal with the issue of calibration of a self-referencing interferometer. The method can be used to reduce the effect of system errors on extracting and reconstructing the wavefront under test, and to improve the measurement accuracy of the self-referencing interferometer.

  14. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  15. An approximate likelihood estimator for the prevalence of infections in vectors using pools of varying sizes.

    PubMed

    Santos, James D; Dorgam, Diana

    2016-09-01

    There are several arthropods that can transmit disease to humans. To make inferences about the rate of infection of these arthropods, it is common to collect a large sample of vectors, divide them into groups (called pools), and apply a test to detect infection. This paper presents an approximate likelihood point estimator of the infection rate for pools of different sizes, applicable when the variability of the pool sizes is small and the infection rate is low. The performance of this estimator was evaluated in four simulated scenarios, created from real experiments selected from the literature. The new estimator performed well in three of these scenarios. As expected, it performed poorly in the scenario with great variability in the pool sizes for some values of the parameter space. PMID:27159117
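
    For readers unfamiliar with pooled testing, the exact ML estimate under the usual assumptions (a perfect test and independent infections) can be obtained by direct numerical maximization of the pooled likelihood; the approximate estimator of the paper is a closed-form alternative to this. The sketch below shows the numerical baseline, with pool sizes and test results simulated for the example rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pooled_mle(pool_sizes, positives):
    """Maximum likelihood estimate of the individual infection rate p
    from pool sizes k_i and binary pool test results y_i (1 = positive),
    assuming a perfect test and independent infections."""
    k = np.asarray(pool_sizes, dtype=float)
    y = np.asarray(positives, dtype=float)

    def neg_log_lik(p):
        q = (1.0 - p) ** k                       # probability a pool of size k_i is negative
        return -np.sum(y * np.log(1.0 - q) + (1.0 - y) * np.log(q))

    res = minimize_scalar(neg_log_lik, bounds=(1e-8, 1 - 1e-8), method="bounded")
    return res.x

# example: 40 pools of sizes 8-12 with simulated test results
rng = np.random.default_rng(1)
sizes = rng.integers(8, 13, size=40)
true_p = 0.01
results = rng.random(40) < 1 - (1 - true_p) ** sizes
print(pooled_mle(sizes, results))
```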

  16. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics

    PubMed Central

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-sung; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. We find, on the assumption that the broadband light source is characterized by circular Gaussian statistics, ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision. PMID:24156045

  17. Fuzzy modeling, maximum likelihood estimation, and Kalman filtering for target tracking in NLOS scenarios

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Yu, Kegen; Wu, Lenan

    2014-12-01

    To mitigate the non-line-of-sight (NLOS) effect, a three-step positioning approach is proposed in this article for target tracking. The possibility of each distance measurement being taken under line-of-sight conditions is first obtained by applying the truncated triangular probability-possibility transformation associated with fuzzy modeling. Based on the calculated possibilities, the measurements are used to obtain intermediate position estimates through maximum likelihood estimation (MLE), according to the identified measurement conditions. These intermediate position estimates are then filtered using a linear Kalman filter (KF) to produce the final target position estimates. The target motion information and the statistical characteristics of the MLE results are employed in updating the KF parameters. The KF position prediction is exploited for MLE parameter initialization and distance measurement selection. Simulation results demonstrate that the proposed approach outperforms existing algorithms in the presence of unknown NLOS propagation conditions and achieves a performance close to that obtained when the propagation conditions are perfectly known.
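
    The third step above is a standard linear Kalman filter. A minimal constant-velocity sketch that smooths a stream of 2-D position fixes (standing in for the intermediate MLE results) is given below; the motion model, noise covariances, and the straight-line example track are assumptions for illustration and do not reproduce the parameter-update scheme of the paper.

```python
import numpy as np

def cv_kalman_filter(z, dt=1.0, q=0.1, r=1.0):
    """Linear constant-velocity Kalman filter that smooths a sequence of
    2-D position estimates z (shape n x 2), e.g. intermediate position fixes."""
    z = np.asarray(z, dtype=float)
    F = np.eye(4); F[0, 2] = F[1, 3] = dt        # state transition: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1  # only position is observed
    Q = q * np.eye(4)                            # process noise (assumed)
    R = r * np.eye(2)                            # measurement noise (assumed)
    x = np.array([z[0, 0], z[0, 1], 0.0, 0.0])   # initialize from the first fix
    P = np.eye(4) * 10.0
    out = []
    for zk in z:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (zk - H @ x)                 # update with the new position fix
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# example: noisy fixes along a straight track
t = np.arange(50)
truth = np.column_stack([t * 0.5, t * 0.2])
fixes = truth + np.random.default_rng(2).normal(0, 1.0, truth.shape)
smoothed = cv_kalman_filter(fixes)
```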

  18. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.

  19. Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355

  20. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  1. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  2. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be remodeled.

  3. Accuracy of Maximum Likelihood Parameter Estimators for Heston Stochastic Volatility SDE

    NASA Astrophysics Data System (ADS)

    Azencott, Robert; Gadhyan, Yutheeka

    2015-04-01

    We study approximate maximum likelihood estimators (MLEs) for the parameters of the widely used Heston stock price and volatility stochastic differential equations (SDEs). We compute explicit closed-form estimators maximizing the discretized log-likelihood of observations recorded at equally spaced times. We compute the asymptotic biases of these parameter estimators for a fixed observation time step, as well as the rate at which these biases vanish as the time step shrinks. We determine asymptotically consistent explicit modifications of these MLEs. For the Heston volatility SDE, we identify a canonical form determined by two canonical parameters, which are explicit functions of the original SDE parameters. We analyze theoretically the asymptotic distribution of the MLEs and of their consistent modifications, and we outline their concrete speeds of convergence by numerical simulations. We clarify the precise dichotomy between asymptotic normality and attraction by stable-like distributions with heavy tails. We illustrate numerical model fitting for Heston SDEs by two concrete examples, one for daily data and one for intraday data, both with moderate parameter values.

  4. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  5. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  6. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses and their statistical properties investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.

  7. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics.

    PubMed

    Gruber, Susan; van der Laan, Mark J

    2010-01-01

    A concrete example of the collaborative double-robust targeted likelihood estimator (C-TMLE) introduced in a companion article in this issue is presented, and applied to the estimation of causal effects and variable importance parameters in genomic data. The focus is on non-parametric estimation in a point treatment data structure. Simulations illustrate the performance of C-TMLE relative to current competitors such as the augmented inverse probability of treatment weighted estimator that relies on an external non-collaborative estimator of the treatment mechanism, and inefficient estimation procedures including propensity score matching and standard inverse probability of treatment weighting. C-TMLE is also applied to the estimation of the covariate-adjusted marginal effect of individual HIV mutations on resistance to the anti-retroviral drug lopinavir. The influence curve of the C-TMLE is used to establish asymptotically valid statistical inference. The list of mutations found to have a statistically significant association with resistance is in excellent agreement with mutation scores provided by the Stanford HIVdb mutation scores database. PMID:21731530

  9. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
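
    To make the comparison concrete, the sketch below implements the classical Kasai lag-one autocorrelation estimator alongside a periodogram-peak estimator; for a single complex exponential in additive white Gaussian noise the periodogram peak coincides with the ML frequency estimate. The sampling rate, Doppler shift, and noise level are hypothetical, and the paper's MLE may differ in detail from this textbook form.

```python
import numpy as np

def kasai_frequency(z, dt):
    """Kasai autocorrelation frequency estimate from a complex sample stream z."""
    acf1 = np.sum(z[1:] * np.conj(z[:-1]))       # lag-one autocorrelation
    return np.angle(acf1) / (2.0 * np.pi * dt)

def periodogram_ml_frequency(z, dt, oversample=16):
    """Frequency of the periodogram peak; for a single complex exponential in
    additive white Gaussian noise this coincides with the ML estimate."""
    n_fft = oversample * len(z)
    spectrum = np.abs(np.fft.fft(z, n_fft)) ** 2
    freqs = np.fft.fftfreq(n_fft, d=dt)
    return freqs[np.argmax(spectrum)]

# example: Doppler shift of 2 kHz sampled at 50 kHz with additive noise
dt = 1 / 50e3
n = np.arange(256)
rng = np.random.default_rng(3)
z = np.exp(2j * np.pi * 2e3 * n * dt) + 0.5 * (rng.normal(size=256) + 1j * rng.normal(size=256))
print(kasai_frequency(z, dt), periodogram_ml_frequency(z, dt))
```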

  10. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  11. Maximum likelihood estimation of missing data applied to flow reconstruction around NACA profiles

    NASA Astrophysics Data System (ADS)

    Leroux, R.; Chatellier, L.; David, L.

    2015-10-01

    In this paper, we investigate maximum likelihood estimation of missing data in fluid flow time series. The maximum likelihood estimates are obtained with the expectation-maximization (EM) algorithm applied to linear and quadratic proper orthogonal decomposition (POD)-Galerkin reduced-order models (ROMs) for various sub-samplings of large data sets. The flows around a NACA0012 profile at a Reynolds number of 10^3 and an angle of incidence of 20° and around a NACA0015 profile at a Reynolds number of 10^5 and an angle of incidence of 30° are first investigated using time-resolved particle image velocimetry measurements and sub-sampled according to different ratios of missing data. The EM algorithm is then applied to the POD ROMs constructed from the sub-sampled data sets. The results show that, depending on the sub-sampling used, the EM algorithm is robust with respect to the Reynolds number and can reproduce the velocity fields and the main structures of the missing flow fields for 50% and 75% of missing data.

  12. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated.

    PubMed

    Langlois, Dominic; Cousineau, Denis; Thivierge, J P

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data. PMID:24580259
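
    For context, the standard ML estimator for a continuous, unbounded power law (the baseline that the authors argue must be modified for truncated and censored data) has the closed form alpha_hat = 1 + n / sum_i log(x_i / x_min). A minimal sketch follows; x_min and the simulated sample are chosen for illustration only.

```python
import numpy as np

def powerlaw_alpha_mle(x, x_min):
    """Standard ML estimate of the exponent of a continuous, unbounded power law
    p(x) ~ x**(-alpha) for x >= x_min (as in Clauset et al.)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# example: exact power-law samples with alpha = 2.5 via inverse-CDF sampling
rng = np.random.default_rng(4)
u = rng.random(10_000)
samples = 1.0 * (1.0 - u) ** (-1.0 / (2.5 - 1.0))   # x_min = 1
print(powerlaw_alpha_mle(samples, x_min=1.0))
```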

  14. Gutenberg-Richter b-value maximum likelihood estimation and sample size

    NASA Astrophysics Data System (ADS)

    Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.

    2016-06-01

    The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are aware of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results give the probabilities of obtaining correct estimates of b, to a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
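
    For reference, the Aki-Utsu estimator discussed above has a simple closed form once a completeness magnitude and a binning width are fixed; the sketch below applies the common half-bin (Utsu) correction for rounded magnitudes. The cutoff, bin width, and synthetic catalogue are assumptions for the example.

```python
import numpy as np

def b_value_aki_utsu(mags, m_c, dm=0.1):
    """Aki-Utsu maximum likelihood b-value for magnitudes rounded to bins of
    width dm and complete above the cutoff magnitude m_c."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    # Utsu's correction shifts the cutoff by half a bin to account for rounding
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# example: synthetic catalogue with a true b-value of 1.0, rounded to 0.1 units
rng = np.random.default_rng(5)
beta = 1.0 * np.log(10.0)
mags_true = 1.95 + rng.exponential(1.0 / beta, size=5000)
mags = np.round(mags_true, 1)
print(b_value_aki_utsu(mags, m_c=2.0, dm=0.1))
```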

  15. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  16. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
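
    As an illustration of the estimation problem analyzed above, the sketch below performs a grid-search ML estimate of the pulse arrival time from photon time stamps, assuming a Gaussian pulse shape on a flat background; when the pulse lies well inside the observation window, the integral term of the Poisson log-likelihood does not depend on the arrival time, so only the sum over photon times needs to be maximized. The rates, pulse width, and window are hypothetical, and the paper's analytical error model is not reproduced.

```python
import numpy as np

def ml_time_of_arrival(arrivals, pulse_rate, bg_rate, width, tau_grid):
    """Grid-search ML estimate of the arrival time tau of a Gaussian-shaped pulse
    observed through an ideal photon-counting detector with intensity
    lambda(t) = bg_rate + pulse_rate * exp(-(t - tau)^2 / (2 * width^2))."""
    arrivals = np.asarray(arrivals)
    log_liks = [
        np.sum(np.log(bg_rate + pulse_rate *
                      np.exp(-0.5 * ((arrivals - tau) / width) ** 2)))
        for tau in tau_grid
    ]
    return tau_grid[int(np.argmax(log_liks))]

# example: simulate background and signal photons, then recover tau
rng = np.random.default_rng(6)
tau_true, width, T = 5.0, 0.2, 10.0
bg = rng.uniform(0.0, T, rng.poisson(20))             # ~20 background counts
sig = rng.normal(tau_true, width, rng.poisson(100))   # ~100 signal counts
arrivals = np.concatenate([bg, sig])
grid = np.linspace(0.5, 9.5, 1001)
print(ml_time_of_arrival(arrivals, pulse_rate=100 / (width * np.sqrt(2 * np.pi)),
                         bg_rate=20 / T, width=width, tau_grid=grid))
```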

  17. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  18. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  19. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  20. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, sigma_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index sigma_2 greater than sigma_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.

  1. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. PMID:23579098

  2. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    PubMed Central

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.

  3. Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models

    PubMed Central

    Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark

    2015-01-01

    This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the

  4. Correlation structure and variable selection in generalized estimating equations via composite likelihood information criteria.

    PubMed

    Nikoloulopoulos, Aristidis K

    2016-06-30

    The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26822854

  5. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  6. Time domain maximum likelihood parameter estimation in LISA Pathfinder data analysis

    NASA Astrophysics Data System (ADS)

    Congedo, G.; Ferraioli, L.; Hueller, M.; De Marchi, F.; Vitale, S.; Armano, M.; Hewitson, M.; Nofrarias, M.

    2012-06-01

    LISA is the upcoming space-based gravitational-wave detector. LISA Pathfinder, to be launched in the coming years, will be the in-flight test of the LISA arm, with hardware (control scheme, sensors, and actuators) identical in design to LISA. LISA Pathfinder will collect a picture of all noise disturbances possibly affecting LISA, achieving the unprecedented pureness of geodesic motion of test masses necessary for the detection of gravitational waves. The first steps of both missions will crucially depend on a very precise calibration of the key system parameters. Moreover, robust parameter estimation is of fundamental importance in the correct assessment of the residual acceleration noise between the test masses, an essential part of the data preprocessing for LISA. In this paper, we present a maximum likelihood parameter estimation technique in the time domain employed for system identification, devised for this calibration, and show its proficiency on simulated data and its validation through Monte Carlo realizations of independent noise runs. We discuss its robustness to nonstandard scenarios possibly arising during the real mission. Furthermore, we apply the same technique to data produced in a mission-like fashion during operational exercises with a realistic simulator provided by the European Space Agency. The result of the investigation is that parameter estimation is mandatory to avoid systematic errors in the estimated differential acceleration noise.

  7. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) model and of the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also benchmarked against the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model than by the nonlinear RC model. On the other hand, the patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and better-converged model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian region.
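
    The abstract mentions estimating the GGD shape factor by the kurtosis method; one common version matches the sample kurtosis of the residuals to its theoretical value for a generalized Gaussian, kurt(beta) = Gamma(5/beta) Gamma(1/beta) / Gamma(3/beta)^2, and solves for beta numerically. The sketch below illustrates that idea; the bracketing interval and the Gaussian test data are assumptions, and this is not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape_from_kurtosis(residuals):
    """Estimate the shape factor beta of a zero-mean generalized Gaussian
    distribution by matching the sample kurtosis to its theoretical value."""
    r = np.asarray(residuals, dtype=float)
    k_sample = np.mean(r ** 4) / np.mean(r ** 2) ** 2

    def mismatch(beta):
        return gamma(5.0 / beta) * gamma(1.0 / beta) / gamma(3.0 / beta) ** 2 - k_sample

    # beta = 2 is Gaussian (kurtosis 3), beta = 1 is Laplacian (kurtosis 6)
    return brentq(mismatch, 0.2, 10.0)

# example: Gaussian residuals should yield a shape factor close to 2
rng = np.random.default_rng(9)
print(ggd_shape_from_kurtosis(rng.normal(size=20_000)))
```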

  8. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    PubMed Central

    Stepanyuk, Andrey; Borisyuk, Anya; Belan, Pavel

    2014-01-01

    Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that makes it possible to estimate not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current compared with peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded under steady-state conditions. Estimation of the unitary current with this method is robust even if the postsynaptic currents are generated by receptors having different kinetic parameters, a case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment. PMID:25324721

  9. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    SciTech Connect

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.

  10. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences by more and more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogen and its eukaryotic hosts. PMID:25911229

  11. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

    In human genetic maps, differences between female (x_f) and male (x_m) map distances may be characterized by the ratio R = x_f/x_m, or the relative difference Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1). For a map of genetic markers spread along a chromosome, Q(d) may be viewed as a graph of Q versus the midpoints, d, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of Q(d) along the chromosome, where Q(d) is expressed as a polynomial in d. Stepwise maximum likelihood polynomial regression of Q is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.

  12. CodonPhyML: Fast Maximum Likelihood Phylogeny Estimation under Codon Substitution Models

    PubMed Central

    Gil, Manuel; Zoller, Stefan; Anisimova, Maria

    2013-01-01

    Markov models of codon substitution naturally incorporate the structure of the genetic code and the selection intensity at the protein level, providing a more realistic representation of protein-coding sequences compared with nucleotide or amino acid models. Thus, for protein-coding genes, phylogenetic inference is expected to be more accurate under codon models. So far, phylogeny reconstruction under codon models has been elusive due to the computational difficulties of dealing with high-dimension matrices. Here, we present a fast maximum likelihood (ML) package for phylogenetic inference, CodonPhyML, offering hundreds of different codon models, the largest variety to date, for phylogeny inference by ML. CodonPhyML is tested on simulated and real data and is shown to offer excellent speed and convergence properties. In addition, CodonPhyML includes the most recent fast methods for estimating phylogenetic branch supports and provides an integral framework for model selection, including amino acid and DNA models. PMID:23436912

  13. On maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions.

    PubMed

    Hornik, Kurt; Grün, Bettina

    2014-01-01

    Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting a ratio of modified Bessel functions, and computational methods are required to carry out this inversion using approximative or iterative algorithms. In this paper we use Amos-type bounds for the ratio to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use them to propose a new approximation for which the error tends to zero when the inverse of the ratio is evaluated at values approaching the upper end of its range (from the left). We show that previously introduced rational bounds for the ratio, which are invertible using quadratic equations, cannot be used to improve these bounds. PMID:25309045
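
    A widely used closed-form approximation for the same inversion problem (distinct from the sharper Amos-type bounds developed in the paper) is the Banerjee et al. (2005) formula kappa ≈ R_bar (d - R_bar^2) / (1 - R_bar^2), where R_bar is the mean resultant length and d the dimension. A minimal sketch:

```python
import numpy as np

def vmf_kappa_approx(samples):
    """Approximate ML estimate of the von Mises-Fisher concentration kappa using
    the closed-form approximation of Banerjee et al. (2005); the bounds-based
    inversion discussed in the paper above is sharper and is not reproduced here."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    r_bar = np.linalg.norm(x.sum(axis=0)) / n     # mean resultant length
    return r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)

# toy usage: unit vectors scattered around a common mean direction
rng = np.random.default_rng(7)
mu = np.array([0.0, 0.0, 1.0])
v = mu + 0.2 * rng.normal(size=(1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(vmf_kappa_approx(v))
```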

  14. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    PubMed

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end. PMID:27364333

  15. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES Beta

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
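
    The baseline metric referred to above, the sample coherence magnitude between two registered complex collects, can be computed in a sliding window as in the sketch below; the CRCD estimator itself is not given in the abstract and is not reproduced here. The window size and the toy speckle example are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def sample_coherence(img1, img2, window=5):
    """Classical sample coherence magnitude between two co-registered complex SAR
    collects, computed over a sliding window (the conventional CCD change metric)."""
    k = np.ones((window, window))

    def box(a):
        # moving-window sum via 2-D convolution with symmetric boundary handling
        return convolve2d(a, k, mode="same", boundary="symm")

    num = np.abs(box(img1 * np.conj(img2)))
    den = np.sqrt(box(np.abs(img1) ** 2) * box(np.abs(img2) ** 2))
    return num / np.maximum(den, 1e-12)

# toy usage on two correlated speckle patches with coherence ~0.9
rng = np.random.default_rng(10)
base = (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))) / np.sqrt(2)
noise = (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))) / np.sqrt(2)
pass2 = 0.9 * base + np.sqrt(1 - 0.9 ** 2) * noise
print(sample_coherence(base, pass2).mean())
```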

  16. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from
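
    As a concrete illustration of the simple (unbroken) case, the ML estimate of a single spectral index above a threshold energy has a closed form, and its asymptotic standard error follows from the Cramer-Rao bound. The sketch below is a minimal idealized example that ignores the detector response functions treated explicitly in the paper; the simulated data and threshold are hypothetical.

```python
import numpy as np

def power_law_mle(energies, e_min):
    """Closed-form ML estimate of the spectral index for f(E) ~ E^(-alpha), E >= e_min."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    n = e.size
    alpha_hat = 1.0 + n / np.sum(np.log(e / e_min))
    # Asymptotic standard error from the Cramer-Rao bound: (alpha - 1) / sqrt(n)
    se = (alpha_hat - 1.0) / np.sqrt(n)
    return alpha_hat, se

# Example: simulate events from a power law with alpha = 2.7 via inverse-CDF sampling
rng = np.random.default_rng(0)
alpha_true, e_min = 2.7, 1.0
u = rng.uniform(size=100_000)
events = e_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
print(power_law_mle(events, e_min))
```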

  17. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating the distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes mean more accurate permeability estimation, since the fracture attributes feed

  18. Step change point estimation in the multivariate-attribute process variability using artificial neural networks and maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad Reza; Amiri, Amirhossein; Mousavi, Seyed Meysam

    2015-07-01

    In some statistical process control applications, the quality of the product or process is represented by a combination of correlated variable and attribute quality characteristics. In such processes, identifying the time at which the process goes out of control can help quality engineers eliminate the assignable causes through proper corrective actions. In this paper, we first use an existing artificial neural network (ANN)-based method for detecting variance shifts and diagnosing the sources of variation in multivariate-attribute processes. Then, based on the quality characteristics responsible for the out-of-control state, we propose a modular ANN-based model for estimating the time of a step change in the multivariate-attribute process variability. We also compare the performance of the ANN-based estimator with the maximum likelihood estimator (MLE). A numerical example based on a simulation study is used to evaluate the performance of the estimators in terms of accuracy and precision criteria. The results of the simulation study show that the proposed ANN-based estimator outperforms the MLE under different out-of-control scenarios, where different shift magnitudes in the covariance matrix of the multivariate-attribute quality characteristics are manifested.
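
    As a simplified illustration of the maximum-likelihood side of this comparison, the sketch below estimates the step-change time for a shift in the variance of a univariate, zero-mean normal process with a known in-control variance; the multivariate-attribute setting of the paper is more involved, and the data used here are hypothetical.

```python
import numpy as np

def variance_change_point_mle(x, sigma0_sq):
    """Profile-likelihood estimate of the step-change time tau in the variance of
    zero-mean normal observations, with known in-control variance sigma0_sq."""
    x = np.asarray(x, dtype=float)
    T = x.size
    best_tau, best_ll = None, -np.inf
    for tau in range(1, T):                     # change occurs after observation tau
        pre, post = x[:tau], x[tau:]
        sigma1_sq = np.mean(post**2)            # MLE of the post-change variance
        ll = (-0.5 * np.sum(pre**2) / sigma0_sq - 0.5 * tau * np.log(sigma0_sq)
              - 0.5 * post.size * (np.log(sigma1_sq) + 1.0))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Example: variance shifts from 1.0 to 4.0 after observation 60
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1.0, 60), rng.normal(0, 2.0, 40)])
print(variance_change_point_mle(x, sigma0_sq=1.0))
```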

  19. Two-Locus Likelihoods Under Variable Population Size and Fine-Scale Recombination Rate Estimation.

    PubMed

    Kamm, John A; Spence, Jeffrey P; Chan, Jeffrey; Song, Yun S

    2016-07-01

    Two-locus sampling probabilities have played a central role in devising an efficient composite-likelihood method for estimating fine-scale recombination rates. Due to mathematical and computational challenges, these sampling probabilities are typically computed under the unrealistic assumption of a constant population size, and simulation studies have shown that resulting recombination rate estimates can be severely biased in certain cases of historical population size changes. To alleviate this problem, we develop here new methods to compute the sampling probability for variable population size functions that are piecewise constant. Our main theoretical result, implemented in a new software package called LDpop, is a novel formula for the sampling probability that can be evaluated by numerically exponentiating a large but sparse matrix. This formula can handle moderate sample sizes ([Formula: see text]) and demographic size histories with a large number of epochs ([Formula: see text]). In addition, LDpop implements an approximate formula for the sampling probability that is reasonably accurate and scales to hundreds in sample size ([Formula: see text]). Finally, LDpop includes an importance sampler for the posterior distribution of two-locus genealogies, based on a new result for the optimal proposal distribution in the variable-size setting. Using our methods, we study how a sharp population bottleneck followed by rapid growth affects the correlation between partially linked sites. Then, through an extensive simulation study, we show that accounting for population size changes under such a demographic model leads to substantial improvements in fine-scale recombination rate estimation. PMID:27182948

  20. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of the seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of the intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to several case studies for which extended hydrological information is available at annual and seasonal scales.
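
    The compatibility constraint has a simple form when seasonal maxima can be treated as independent: the annual non-exceedance probability is the product of the seasonal ones. The sketch below fits a Gumbel distribution to each season by maximum likelihood and derives an annual quantile from that product; the discharge series are hypothetical, and the paper's joint estimation procedure is more elaborate than this season-by-season fit.

```python
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import brentq

# Hypothetical seasonal maximum series (m^3/s): wet and dry season of the same years
wet = np.array([820., 640., 910., 700., 1150., 780., 690., 980., 860., 730.])
dry = np.array([210., 180., 260., 150., 300., 190., 230., 170., 250., 220.])

# Fit a Gumbel distribution to each season by maximum likelihood
params = [gumbel_r.fit(series) for series in (wet, dry)]

# Compatibility: if seasonal maxima are independent, the annual CDF is the product of seasonal CDFs
def annual_cdf(x):
    return np.prod([gumbel_r.cdf(x, *p) for p in params])

# Annual flood quantile for a given return period (e.g. 100 years)
def annual_quantile(return_period):
    target = 1.0 - 1.0 / return_period
    return brentq(lambda x: annual_cdf(x) - target, 1.0, 1e5)

print(annual_quantile(100.0))
```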

  1. A likelihood estimation of HIV incidence incorporating information on past prevalence

    PubMed Central

    Gabaitiri, Lesego; Mwambi, Henry G.; Lagakos, Stephen W.; Pagano, Marcello

    2014-01-01

    The prevalence and incidence of an epidemic are basic characteristics that are essential for monitoring its impact, determining public health priorities, assessing the effect of interventions, and for planning purposes. A direct approach for estimating incidence is to undertake a longitudinal cohort study where a representative sample of disease-free individuals is followed for a specified period of time and new cases of infection are observed and recorded. This approach is expensive, time-consuming and prone to bias due to loss to follow-up. An alternative approach is to estimate incidence from cross-sectional surveys using biomarkers to identify persons recently infected, as in Brookmeyer and Quinn (1995) and Janssen et al. (1998). This paper builds on the work of Janssen et al. (1998) and extends the theoretical framework proposed by Balasubramanian and Lagakos (2010) by incorporating information on past prevalence and deriving maximum likelihood estimators of incidence. The performance of the proposed method is evaluated through a simulation study, and its use is illustrated using data from the Botswana AIDS Impact (BAIS) III survey of 2008. PMID:25197147

  2. A likelihood framework for joint estimation of salmon abundance and migratory timing using telemetric mark-recapture

    USGS Publications Warehouse

    Bromaghin, Jeffrey; Gates, Kenneth S.; Palmer, Douglas E.

    2010-01-01

    Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding

  3. The Likelihood Function and Likelihood Statistics

    NASA Astrophysics Data System (ADS)

    Robinson, Edward L.

    2016-01-01

    The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
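
    A minimal illustration of that last point: for Poisson-distributed counts the maximum likelihood principle leads to a different objective function than least squares, and the two fits generally disagree at low counts. The decay-plus-background model and parameter values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated photon counts in time bins from an exponential decay plus background
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
model = lambda p, t: p[0] * np.exp(-t / p[1]) + p[2]      # amplitude, lifetime, background
counts = rng.poisson(model([100.0, 2.0, 5.0], t))

def neg_log_likelihood(p):
    """Poisson negative log-likelihood (dropping the data-only ln(n!) term)."""
    mu = model(p, t)
    if np.any(mu <= 0):
        return np.inf
    return np.sum(mu - counts * np.log(mu))

def sum_of_squares(p):
    return np.sum((counts - model(p, t)) ** 2)

p0 = [80.0, 1.5, 2.0]
p_ml = minimize(neg_log_likelihood, p0, method="Nelder-Mead").x
p_ls = minimize(sum_of_squares, p0, method="Nelder-Mead").x
print("ML estimate:", p_ml, "Least-squares estimate:", p_ls)
```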

  4. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package

    PubMed Central

    Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.

    2015-01-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated

  5. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
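
    The same estimation problem is usually presented today in EM form: labeled observations keep fixed component memberships, while unlabeled observations receive posterior responsibilities at each iteration. The sketch below is an illustrative two-component, univariate EM with simulated data; it is a stand-in for, not a reproduction of, the steepest-ascent procedures discussed above.

```python
import numpy as np
from scipy.stats import norm

def em_mixture_partial(x_labeled, labels, x_unlabeled, n_iter=200):
    """EM for a two-component univariate normal mixture in which part of the sample
    carries known component labels (0 or 1) and the rest is unlabeled."""
    x_all = np.concatenate([x_labeled, x_unlabeled])
    # responsibilities: fixed 0/1 for labeled points, updated for unlabeled points
    r = np.zeros((x_all.size, 2))
    r[np.arange(labels.size), labels] = 1.0
    # crude initialization from the labeled points
    mu = np.array([x_labeled[labels == k].mean() for k in (0, 1)])
    sd = np.array([x_labeled[labels == k].std() + 1e-6 for k in (0, 1)])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step (unlabeled points only)
        dens = np.column_stack([pi[k] * norm.pdf(x_unlabeled, mu[k], sd[k]) for k in (0, 1)])
        r[labels.size:] = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted mixing proportions, means and standard deviations
        nk = r.sum(axis=0)
        pi = nk / nk.sum()
        mu = (r * x_all[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x_all[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(3)
x_lab = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 30)])
labs = np.repeat([0, 1], 30)
x_unlab = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 300)])
print(em_mixture_partial(x_lab, labs, x_unlab))
```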

  6. Maximum-likelihood q-estimator uncovers the role of potassium at neuromuscular junctions.

    PubMed

    da Silva, A J; Trindade, M A S; Santos, D O C; Lima, R F

    2016-02-01

    Recently, we demonstrated the existence of nonextensive behavior in neuromuscular transmission (da Silva et al. in Phys Rev E 84:041925, 2011). In this letter, we first obtain a maximum-likelihood q-estimator to calculate the scale factor ([Formula: see text]) and the q-index of q-Gaussian distributions. Next, we use the indexes to analyze spontaneous miniature end plate potentials in electrophysiological recordings from neuromuscular junctions. These calculations were performed assuming both normal and high extracellular potassium concentrations [Formula: see text]. This protocol was used to test the validity of Tsallis statistics under electrophysiological conditions closely resembling physiological stimuli. The analysis shows that q-indexes are distinct depending on the extracellular potassium concentration. Our letter provides a general way to obtain the best estimate of parameters from a q-Gaussian distribution function. It also expands the validity of Tsallis statistics in realistic physiological stimulus conditions. In addition, we discuss the physical and physiological implications of these findings. PMID:26721559

  7. Introducing an Interpolation Method to Efficiently Implement an Approximate Maximum Likelihood Estimator for the Hurst Exponent

    NASA Astrophysics Data System (ADS)

    Chang, Yen-Ching

    2015-10-01

    Efficiency and accuracy are two unavoidable considerations when estimating the Hurst exponent. Recently, an efficient implementation of the maximum likelihood estimator (MLE) (simply called the fast MLE) for the Hurst exponent was proposed, based on a combination of the Levinson algorithm and Cholesky decomposition; furthermore, the fast MLE considers all four possible cases: known mean, unknown mean, known variance, and unknown variance. In this paper, four cases of an approximate MLE (AMLE) were obtained based on two approximations of the logarithmic determinant and the inverse of the covariance matrix. The computational cost of the AMLE is much lower than that of the MLE, but a little higher than that of the fast MLE. To raise the computational efficiency of the proposed AMLE, the required power spectral density (PSD) was calculated indirectly by interpolating two suitable PSDs chosen from a set of established PSDs. Experimental results show that the AMLE through interpolation (simply called the interpolating AMLE) can speed up computation. The computational speed of the interpolating AMLE is on average over 24 times faster than that of the fast MLE, while keeping the accuracy very close to that of the MLE or the fast MLE.

  8. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.

    2002-01-01

    The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specificity of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum variance, and normally distributed spectra information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope. This ML methodology is then generalized to estimate broken power law spectral parameters from real cosmic-ray data sets.

  9. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases. PMID:26289495

  10. List-Mode Likelihood: EM Algorithm and Image Quality Estimation Demonstrated on 2-D PET

    PubMed Central

    Barrett, Harrison H.

    2010-01-01

    Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al. [1], this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus, avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) [2-D PET] detector. We compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta. PMID:9688154
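
    For intuition, the binned counterpart of the list-mode algorithm is the familiar ML-EM update, in which each image value is scaled by a backprojected ratio of measured to predicted counts; the list-mode version replaces the sum over detector bins with a sum over recorded events. The sketch below shows the binned update on a toy, hypothetical system matrix.

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=50):
    """Binned ML-EM reconstruction for counts ~ Poisson(A @ lam)."""
    A = np.asarray(system_matrix, dtype=float)
    lam = np.ones(A.shape[1])                  # flat initial image
    sensitivity = A.sum(axis=0)                # s_j = sum_i a_ij
    for _ in range(n_iter):
        forward = A @ lam                      # predicted counts per bin
        ratio = np.where(forward > 0, counts / forward, 0.0)
        lam *= (A.T @ ratio) / sensitivity     # multiplicative EM update
    return lam

# Toy example: 3 detector bins, 2 image voxels (hypothetical system matrix)
A = np.array([[0.8, 0.1],
              [0.5, 0.5],
              [0.1, 0.8]])
true_image = np.array([10.0, 3.0])
rng = np.random.default_rng(4)
counts = rng.poisson(A @ true_image)
print(mlem(A, counts))
```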

  11. Maximum likelihood estimation of vehicle position for outdoor image sensor-based visible light positioning system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Lin, Jiming

    2016-04-01

    Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the view of statistical optimization for an outdoor image sensor-based visible light positioning system, we analyze and derive the maximum likelihood estimation and corresponding Cramér-Rao lower bounds of vehicle position, under the condition that the observation values of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance being constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.

  12. Qualitative release assessment to estimate the likelihood of henipavirus entering the United Kingdom.

    PubMed

    Snary, Emma L; Ramnial, Vick; Breed, Andrew C; Stephenson, Ben; Field, Hume E; Fooks, Anthony R

    2012-01-01

    The genus Henipavirus includes Hendra virus (HeV) and Nipah virus (NiV), for which fruit bats (particularly those of the genus Pteropus) are considered to be the wildlife reservoir. The recognition of henipaviruses occurring across a wider geographic and host range suggests the possibility of the virus entering the United Kingdom (UK). To estimate the likelihood of henipaviruses entering the UK, a qualitative release assessment was undertaken. To facilitate the release assessment, the world was divided into four zones according to location of outbreaks of henipaviruses, isolation of henipaviruses, proximity to other countries where incidents of henipaviruses have occurred and the distribution of Pteropus spp. fruit bats. From this release assessment, the key findings are that the importation of fruit from Zone 1 and 2 and bat bushmeat from Zone 1 each have a Low annual probability of release of henipaviruses into the UK. Similarly, the importation of bat meat from Zone 2, horses and companion animals from Zone 1 and people travelling from Zone 1 and entering the UK was estimated to pose a Very Low probability of release. The annual probability of release for all other release routes was assessed to be Negligible. It is recommended that the release assessment be periodically re-assessed to reflect changes in knowledge and circumstances over time. PMID:22328916

  13. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion is not easy to satisfy in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases for exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
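
    One convenient way to reuse a Levenberg-Marquardt least-squares routine for the Poisson MLE is to hand it signed deviance residuals, whose sum of squares equals the Poisson deviance and is therefore minimized at the ML solution. The sketch below illustrates this idea on a hypothetical mono-exponential decay histogram; it is in the spirit of the approach described above but is not necessarily the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical fluorescence-decay histogram: mono-exponential decay plus constant background
rng = np.random.default_rng(5)
t = np.linspace(0.0, 12.5, 256)
model = lambda p, t: p[0] * np.exp(-t / p[1]) + p[2]
counts = rng.poisson(model([400.0, 2.5, 10.0], t))

def deviance_residuals(p):
    """Signed square roots of the per-bin Poisson deviance; their sum of squares is the
    Poisson deviance, so an L-M least-squares solver minimizes the MLE merit function."""
    mu = model(p, t)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(counts > 0, counts * np.log(counts / mu), 0.0)
    d2 = 2.0 * (mu - counts + term)
    return np.sign(counts - mu) * np.sqrt(np.maximum(d2, 0.0))

fit = least_squares(deviance_residuals, x0=[300.0, 2.0, 5.0], method="lm")
print(fit.x)
```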

  14. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes the speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. Accuracy and agreement were assessed on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  15. Inbreeding of bottlenecked butterfly populations. Estimation using the likelihood of changes in marker allele frequencies.

    PubMed Central

    Saccheri, I J; Wilson, I J; Nichols, R A; Bruford, M W; Brakefield, P M

    1999-01-01

    Polymorphic enzyme and minisatellite loci were used to estimate the degree of inbreeding in experimentally bottlenecked populations of the butterfly, Bicyclus anynana (Satyridae), three generations after founding events of 2, 6, 20, or 300 individuals, each bottleneck size being replicated at least four times. Heterozygosity fell more than expected, though not significantly so, but this traditional measure of the degree of inbreeding did not make full use of the information from genetic markers. It proved more informative to estimate directly the probability distribution of a measure of inbreeding, sigma2, the variance in the number of descendants left per gene. In all bottlenecked lines, sigma2 was significantly larger than in control lines (300 founders). We demonstrate that this excess inbreeding was brought about both by an increase in the variance of reproductive success of individuals, but also by another process. We argue that in bottlenecked lines linkage disequilibrium generated by the small number of haplotypes passing through the bottleneck resulted in hitchhiking of particular marker alleles with those haplotypes favored by selection. In control lines, linkage disequilibrium was minimal. Our result, indicating more inbreeding than expected from demographic parameters, contrasts with the findings of previous (Drosophila) experiments in which the decline in observed heterozygosity was slower than expected and attributed to associative overdominance. The different outcomes may both be explained as a consequence of linkage disequilibrium under different regimes of inbreeding. The likelihood-based method to estimate inbreeding should be of wide applicability. It was, for example, able to resolve small differences in sigma2 among replicate lines within bottleneck-size treatments, which could be related to the observed variation in reproductive viability. PMID:10049922

  16. Gay men's estimates of the likelihood of HIV transmission in sexual behaviours.

    PubMed

    Gold, R S; Skinner, M J

    2001-04-01

    In 3 studies we recorded gay men's estimates of the likelihood that HIV would be transmitted in various sexual behaviours. In Study 1 (data collected 1993, n=92), the men were found to believe that transmissibility is very much greater than it actually is; that insertive unprotected anal intercourse (UAI) by an HIV-infected partner is made safer by withdrawal before ejaculation, and very much safer by withdrawal before either ejaculation or pre-ejaculation; that UAI is very much safer when an infected partner is receptive rather than insertive; that insertive oral sex by an infected partner is much less risky than even the safest variant of UAI; that HIV is less transmissible very early after infection than later on; and that risk accumulates over repeated acts of UAI less than it actually does. In Study 2 (data collected 1997/8, n=200), it was found that younger and older uninfected men generally gave similar estimates of transmissibility, but that infected men gave somewhat lower estimates than uninfected men; and that estimates were unaffected by asking the men to imagine that they themselves, rather than a hypothetical other gay man, were engaging in the behaviours. Comparison of the 1993 and 1997/8 results suggested that there had been some effect of an educational campaign warning of the dangers of withdrawal; however, there had been no effect either of a campaign warning of the dangers of receptive UAI by an infected partner, or of publicity given to the greater transmissibility of HIV shortly after infection. In Study 3 (data collected 1999, n=59), men induced into a positive mood were found to give lower estimates of transmissibility than either men induced into a neutral mood or men induced into a negative mood. It is argued that the results reveal the important contribution made to gay men's transmissibility estimates by cognitive strategies (such as the 'availability heuristic' and 'anchoring and adjustment') known to be general characteristics of human

  17. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods.

    PubMed

    Yang, Z

    1994-09-01

    Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first method, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called the "fixed-rates model", classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to that of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites. PMID:7932792
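
    The discrete gamma rates themselves are straightforward to compute: category boundaries are quantiles of a mean-one gamma distribution, and each category's representative rate is its conditional mean, obtained from the standard incomplete-gamma identity. The sketch below follows that recipe using SciPy; the shape parameter value is arbitrary and the code is an illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k=4):
    """Mean rates of k equal-probability categories of a Gamma(alpha) distribution with mean 1,
    as used in the discrete-gamma model of among-site rate variation."""
    # category boundaries at the i/k quantiles of Gamma(shape=alpha, mean=1)
    bounds = gamma.ppf(np.linspace(0.0, 1.0, k + 1), a=alpha, scale=1.0 / alpha)
    # conditional means via the identity E[X; a < X < b] = (alpha/beta) * [F_{alpha+1}(b) - F_{alpha+1}(a)]
    upper = gamma.cdf(bounds, a=alpha + 1.0, scale=1.0 / alpha)
    rates = k * np.diff(upper)
    return rates   # these average to 1 by construction

print(discrete_gamma_rates(alpha=0.5, k=4))
```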

  18. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information using the multiple data sets in concert as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.

  19. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    SciTech Connect

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-06-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions of interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.

  20. Validating new diagnostic imaging criteria for primary progressive aphasia via anatomical likelihood estimation meta-analyses.

    PubMed

    Bisenius, S; Neumann, J; Schroeter, M L

    2016-04-01

    Recently, diagnostic clinical and imaging criteria for primary progressive aphasia (PPA) have been revised by an international consortium (Gorno-Tempini et al. Neurology 2011;76:1006-14). The aim of this study was to validate the specificity of the new imaging criteria and investigate whether different imaging modalities [magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET)] require different diagnostic subtype-specific imaging criteria. Anatomical likelihood estimation meta-analyses were conducted for PPA subtypes across a large cohort of 396 patients: firstly, across MRI studies for each of the three PPA subtypes followed by conjunction and subtraction analyses to investigate the specificity, and, secondly, by comparing results across MRI vs. FDG-PET studies in semantic dementia and progressive nonfluent aphasia. Semantic dementia showed atrophy in temporal, fusiform, parahippocampal gyri, hippocampus, and amygdala, progressive nonfluent aphasia in left putamen, insula, middle/superior temporal, precentral, and frontal gyri, logopenic progressive aphasia in middle/superior temporal, supramarginal, and dorsal posterior cingulate gyri. Results of the disease-specific meta-analyses across MRI studies were disjunct. Similarly, atrophic and hypometabolic brain networks were regionally dissociated in both semantic dementia and progressive nonfluent aphasia. In conclusion, meta-analyses support the specificity of new diagnostic imaging criteria for PPA and suggest that they should be specified for each imaging modality separately. PMID:26901360

  1. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
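
    The core idea of the contracting-grid search is easy to state: evaluate the likelihood on a coarse grid, re-center the grid on the best point, shrink it, and repeat until the desired precision is reached. The sketch below illustrates the serial version on a toy two-parameter log-likelihood; the pipelined FPGA and GPU implementations described in the paper are not reproduced here.

```python
import numpy as np

def contracting_grid_max(loglike, center, width, n=8, n_iter=10, shrink=0.5):
    """Maximize loglike(x, y) with a contracting-grid search: evaluate an n-by-n grid,
    re-center on the best point, shrink the grid, and repeat."""
    cx, cy = center
    wx, wy = width
    for _ in range(n_iter):
        xs = cx + np.linspace(-wx / 2, wx / 2, n)
        ys = cy + np.linspace(-wy / 2, wy / 2, n)
        grid = np.array([[loglike(x, y) for y in ys] for x in xs])
        i, j = np.unravel_index(np.argmax(grid), grid.shape)
        cx, cy, wx, wy = xs[i], ys[j], wx * shrink, wy * shrink
    return cx, cy

# Toy example: Gaussian log-likelihood peaked at (2.3, -1.7)
loglike = lambda x, y: -((x - 2.3) ** 2 + (y + 1.7) ** 2)
print(contracting_grid_max(loglike, center=(0.0, 0.0), width=(10.0, 10.0)))
```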

  3. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP. PMID:25994715
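
    The "early" integration step rests on the standard maximum-likelihood cue-combination rule for continuous internal representations: each cue is weighted by its precision (inverse variance) before categorization. The sketch below shows only that generic fusion rule with made-up numbers, not the full model or its fitting procedure.

```python
def mle_fuse(est_a, var_a, est_v, var_v):
    """Precision-weighted (maximum-likelihood) fusion of an auditory and a visual estimate
    of the same continuous internal variable."""
    w_a, w_v = 1.0 / var_a, 1.0 / var_v
    fused = (w_a * est_a + w_v * est_v) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)
    return fused, fused_var

# Example: a reliable visual cue pulls the fused estimate toward the visual value
print(mle_fuse(est_a=0.2, var_a=1.0, est_v=1.0, var_v=0.25))
```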

  4. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    SciTech Connect

    Gang, Grace J.; Stayman, J. Webster; Zbijewski, Wojciech; Siewerdsen, Jeffrey H.

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP

  5. Estimating a Logistic Discrimination Function When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach

    PubMed Central

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations. PMID:26474313

  6. Theoretic Fit and Empirical Fit: The Performance of Maximum Likelihood versus Generalized Least Squares Estimation in Structural Equation Models.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Troye, Sigurd Villads; Howell, Roy D.

    1999-01-01

    Used simulation to compare the ability of maximum likelihood (ML) and generalized least-squares (GLS) estimation to provide theoretic fit in models that are parsimonious representations of a true model. The better empirical fit obtained for GLS, compared with ML, was obtained at the cost of lower theoretic fit. (Author/SLD)

  7. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  8. Efficient Full Information Maximum Likelihood Estimation for Multidimensional IRT Models. Research Report. ETS RR-09-03

    ERIC Educational Resources Information Center

    Rijmen, Frank

    2009-01-01

    Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…

  9. Marginal Maximum Likelihood Estimation for Three-Parameter Polychotomous Item Response Models: Application of an EM Algorithm.

    ERIC Educational Resources Information Center

    Muraki, Eiji

    This study examines the application of the marginal maximum likelihood (MML) EM algorithm to the parameter estimation problem of the three-parameter normal ogive and logistic polychotomous item response models. A three-parameter normal ogive model, the Graded Response model, has been developed on the basis of Samejima's two-parameter graded…

  10. Bayesian Analysis Using a Simple Likelihood Model Outperforms Parsimony for Estimation of Phylogeny from Discrete Morphological Data

    PubMed Central

    Wright, April M.; Hillis, David M.

    2014-01-01

    Despite the introduction of likelihood-based methods for estimating phylogenetic trees from phenotypic data, parsimony remains the most widely-used optimality criterion for building trees from discrete morphological data. However, it has been known for decades that there are regions of solution space in which parsimony is a poor estimator of tree topology. Numerous software implementations of likelihood-based models for the estimation of phylogeny from discrete morphological data exist, especially for the Mk model of discrete character evolution. Here we explore the efficacy of Bayesian estimation of phylogeny, using the Mk model, under conditions that are commonly encountered in paleontological studies. Using simulated data, we describe the relative performances of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. PMID:25279853

  11. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.

  12. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    PubMed Central

    DeRamus, Thomas P.; Kana, Rajesh K.

    2014-01-01

    Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD, may entail altered cortical anatomy and neurodevelopmental adaptations. PMID:25844306

  13. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D); flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  14. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  15. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    NASA Astrophysics Data System (ADS)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variation of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits equifinality at a first stage. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as its lack of statistical formality, it proves to be a useful tool for assisting the modeller in identifying critical parameters.
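    For readers unfamiliar with the method, the Python sketch below shows the core GLUE loop in its simplest form: draw parameter sets, score each simulation with an informal likelihood measure (here the Nash-Sutcliffe efficiency), reject non-behavioural sets below a threshold, and build likelihood-weighted uncertainty bounds from the behavioural ensemble. The model, sampler, and threshold are placeholders, not those used in the study above.

        import numpy as np

        def glue(run_model, observed, sample_params, n_samples=5000,
                 threshold=0.6, quantiles=(0.05, 0.95)):
            """Minimal GLUE sketch.  `run_model(theta)` returns a simulated series
            aligned with `observed`; `sample_params()` draws one random parameter set."""
            params, likelihoods, sims = [], [], []
            for _ in range(n_samples):
                theta = sample_params()
                sim = run_model(theta)
                # Nash-Sutcliffe efficiency used as an informal likelihood measure
                nse = 1.0 - np.sum((sim - observed) ** 2) / np.sum(
                    (observed - observed.mean()) ** 2)
                if nse >= threshold:                      # behavioural parameter set
                    params.append(theta)
                    likelihoods.append(nse)
                    sims.append(sim)
            if not params:
                raise RuntimeError("no behavioural parameter sets found")

            w = np.asarray(likelihoods)
            w /= w.sum()                                  # normalised likelihood weights
            sims = np.asarray(sims)
            lower, upper = [], []
            for t in range(sims.shape[1]):                # weighted quantiles per time step
                order = np.argsort(sims[:, t])
                cdf = np.cumsum(w[order])
                lower.append(sims[order, t][np.searchsorted(cdf, quantiles[0])])
                upper.append(sims[order, t][np.searchsorted(cdf, quantiles[1])])
            return params, np.asarray(lower), np.asarray(upper)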

  16. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  17. Uncertainty estimation of end-member mixing using generalized likelihood uncertainty estimation (GLUE), applied in a lowland catchment

    NASA Astrophysics Data System (ADS)

    Delsman, Joost R.; Essink, Gualbert H. P. Oude; Beven, Keith J.; Stuyfzand, Pieter J.

    2013-08-01

    End-member mixing models have been widely used to separate the different components of a hydrograph, but their effectiveness suffers from uncertainty in both the identification of end-members and spatiotemporal variation in end-member concentrations. In this paper, we outline a procedure, based on the generalized likelihood uncertainty estimation (GLUE) framework, to more inclusively evaluate uncertainty in mixing models than existing approaches. We apply this procedure, referred to as G-EMMA, to a yearlong chemical data set from the heavily impacted agricultural Lissertocht catchment, Netherlands, and compare its results to the "traditional" end-member mixing analysis (EMMA). While the traditional approach appears unable to adequately deal with the large spatial variation in one of the end-members, the G-EMMA procedure successfully identified, with varying uncertainty, contributions of five different end-members to the stream. Our results suggest that the concentration distribution of "effective" end-members, that is, the flux-weighted input of an end-member to the stream, can differ markedly from that inferred from sampling of water stored in the catchment. Results also show that the uncertainty arising from identifying the correct end-members may alter calculated end-member contributions by up to 30%, stressing the importance of including the identification of end-members in the uncertainty assessment.

  18. dPIRPLE: A Joint Estimation Framework for Deformable Registration and Penalized-Likelihood CT Image Reconstruction using Prior Images

    PubMed Central

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-01-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration

  19. Reconstruction of difference in sequential CT studies using penalized likelihood estimation.

    PubMed

    Pourmorteza, A; Dang, H; Siewerdsen, J H; Stayman, J W

    2016-03-01

    Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, a clinically reasonable capture range was demonstrated. Lastly, the registration algorithm had a broad capture range and the error for reconstruction of CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. The RoD has potential for delivering high-quality difference images in a range of sequential clinical
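    As a rough illustration only (the authors' method uses a Poisson-based penalized-likelihood objective with joint registration and edge-preserving regularization, none of which is reproduced here), the Python sketch below reconstructs a difference image from a linear forward model y ≈ A(x_prior + d) by gradient descent on a least-squares data term with a simple quadratic penalty on d.

        import numpy as np

        def reconstruct_difference(A, y, x_prior, beta=1.0, step=1e-3, n_iter=500):
            """Toy reconstruction-of-difference sketch (Gaussian surrogate, prior assumed registered).

            A       : (m, n) system matrix of a linear forward model
            y       : (m,) measured projections of the current anatomy
            x_prior : (n,) prior image
            Minimises ||A (x_prior + d) - y||^2 + beta * ||d||^2 over the difference d;
            `step` must be small relative to the largest eigenvalue of A^T A to converge.
            """
            d = np.zeros_like(x_prior, dtype=float)
            for _ in range(n_iter):
                residual = A @ (x_prior + d) - y
                grad = 2.0 * (A.T @ residual) + 2.0 * beta * d   # gradient of the objective
                d -= step * grad
            return d, x_prior + d          # difference image and current-image estimate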

  20. Reconstruction of difference in sequential CT studies using penalized likelihood estimation

    NASA Astrophysics Data System (ADS)

    Pourmorteza, A.; Dang, H.; Siewerdsen, J. H.; Stayman, J. W.

    2016-03-01

    Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, a clinically reasonable capture range was demonstrated. Lastly, the registration algorithm had a broad capture range and the error for reconstruction of CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. The RoD has potential for delivering high-quality difference images in a range of sequential clinical

  1. Reconstruction of difference in sequential CT studies using penalized likelihood estimation

    PubMed Central

    Pourmorteza, A; Dang, H; Siewerdsen, J H; Stayman, J W

    2016-01-01

    Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm, in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, a clinically reasonable capture range was demonstrated. Lastly, the registration algorithm had a broad capture range and the error for reconstruction of CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. The RoD has potential for delivering high-quality difference images in a range of sequential clinical

  2. Maximum-likelihood estimation of photon-number distribution from homodyne statistics

    NASA Astrophysics Data System (ADS)

    Banaszek, Konrad

    1998-06-01

    We present a method for reconstructing the photon-number distribution from the homodyne statistics based on maximization of the likelihood function derived from the exact statistical description of a homodyne experiment. This method incorporates in a natural way the physical constraints on the reconstructed quantities, and the compensation for the nonunit detection efficiency.

  3. IQ-TREE: A Fast and Effective Stochastic Algorithm for Estimating Maximum-Likelihood Phylogenies

    PubMed Central

    Nguyen, Lam-Tung; Schmidt, Heiko A.; von Haeseler, Arndt; Minh, Bui Quang

    2015-01-01

    Large phylogenomics data sets require fast tree inference methods, especially for maximum-likelihood (ML) phylogenies. Fast programs exist, but due to the inherent heuristics used to find optimal trees, it is not clear whether the best tree is found. Thus, there is a need for additional approaches that employ different search strategies to find ML trees and that are at the same time as fast as currently available ML programs. We show that a combination of hill-climbing approaches and a stochastic perturbation method can be implemented time-efficiently. If we allow the same CPU time as RAxML and PhyML, then our software IQ-TREE found higher likelihoods for between 62.2% and 87.1% of the studied alignments, thus efficiently exploring the tree space. If we use the IQ-TREE stopping rule, RAxML and PhyML are faster on 75.7% and 47.1% of the DNA alignments and 42.2% and 100% of the protein alignments, respectively. However, the proportion of alignments for which IQ-TREE obtains higher likelihoods then improves to 73.3–97.1%. IQ-TREE is freely available at http://www.cibiv.at/software/iqtree. PMID:25371430

  4. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398

  5. A note on weighted likelihood and Jeffreys modal estimation of proficiency levels in polytomous item response models.

    PubMed

    Magis, David

    2015-03-01

    Warm (in Psychometrika, 54, 427-450, 1989) established the equivalence between the so-called Jeffreys modal and the weighted likelihood estimators of proficiency level with some dichotomous item response models. The purpose of this note is to extend this result to polytomous item response models. First, a general condition is derived to ensure the perfect equivalence between these two estimators. Second, it is shown that this condition is fulfilled by two broad classes of polytomous models including, among others, the partial credit, rating scale, graded response, and nominal response models. PMID:24282130
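    Warm's equivalence is easy to check numerically in the dichotomous case: the weighted likelihood estimator maximizes the log-likelihood plus half the log of the test information, which is exactly the posterior mode under the Jeffreys prior. The Python sketch below does this for a hypothetical Rasch (1PL) response pattern; the note above concerns extending this equivalence to polytomous models.

        import numpy as np
        from scipy.optimize import minimize_scalar

        b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # item difficulties (hypothetical)
        x = np.array([1, 1, 0, 1, 0])               # observed item responses

        def irf(theta):
            return 1.0 / (1.0 + np.exp(-(theta - b)))        # Rasch item response function

        def neg_weighted_loglik(theta):
            p = irf(theta)
            loglik = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
            info = np.sum(p * (1 - p))                       # test information I(theta)
            # WLE objective = log-likelihood + 0.5 * log I(theta),
            # i.e. the log posterior under the Jeffreys prior sqrt(I(theta)) (Warm, 1989)
            return -(loglik + 0.5 * np.log(info))

        wle = minimize_scalar(neg_weighted_loglik, bounds=(-6.0, 6.0), method="bounded")
        print("weighted likelihood / Jeffreys modal estimate of theta:", round(wle.x, 3))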

  6. Beyond the fisher-matrix formalism: exact sampling distributions of the maximum-likelihood estimator in gravitational-wave parameter estimation.

    PubMed

    Vallisneri, Michele

    2011-11-01

    Gravitational-wave astronomers often wish to characterize the expected parameter-estimation accuracy of future observations. The Fisher matrix provides a lower bound on the spread of the maximum-likelihood estimator across noise realizations, as well as the leading-order width of the posterior probability, but it is limited to high signal strengths often not realized in practice. By contrast, Monte Carlo Bayesian inference provides the full posterior for any signal strength, but it is too expensive to repeat for a representative set of noises. Here I describe an efficient semianalytical technique to map the exact sampling distribution of the maximum-likelihood estimator across noise realizations, for any signal strength. This technique can be applied to any estimation problem for signals in additive Gaussian noise. PMID:22181593

  7. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  8. Estimating epidemiological parameters for bovine tuberculosis in British cattle using a Bayesian partial-likelihood approach

    PubMed Central

    O'Hare, A.; Orton, R. J.; Bessell, P. R.; Kao, R. R.

    2014-01-01

    Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems. PMID:24718762

  9. Estimating epidemiological parameters for bovine tuberculosis in British cattle using a Bayesian partial-likelihood approach.

    PubMed

    O'Hare, A; Orton, R J; Bessell, P R; Kao, R R

    2014-05-22

    Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems. PMID:24718762

  10. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423

  11. Estimating the likelihood of sustained virological response in chronic hepatitis C therapy.

    PubMed

    Mauss, S; Hueppe, D; John, C; Goelz, J; Heyne, R; Moeller, B; Link, R; Teuber, G; Herrmann, A; Spelter, M; Wollschlaeger, S; Baumgarten, A; Simon, K-G; Dikopoulos, N; Witthoeft, T

    2011-04-01

    The likelihood of a sustained virological response (SVR) is the most important factor for physicians and patients in the decision to initiate and continue therapy for chronic hepatitis C (CHC) infection. This study identified predictive factors for SVR with peginterferon plus ribavirin (RBV) in patients with CHC treated under 'real-life' conditions. The study cohort consisted of patients from a large, retrospective German multicentre, observational study who had been treated with peginterferon alfa-2a plus RBV or peginterferon alfa-2b plus RBV between the years 2000 and 2007. To ensure comparability regarding peginterferon therapies, patients were analysed in pairs matched by several baseline variables. Univariate and multivariate logistic regression analyses were used to determine the effect of nonmatched baseline variables and treatment modality on SVR. Among 2378 patients (1189 matched pairs), SVR rates were 57.9% overall, 46.5% in HCV genotype 1/4-infected patients and 77.3% in genotype 2/3-infected patients. In multivariate logistic regression analysis, positive predictors of SVR were HCV genotype 2 infection, HCV genotype 3 infection, low baseline viral load and treatment with peginterferon alfa-2a. Negative predictors of SVR were higher age (≥40 years), elevated baseline gamma-glutamyl transpeptidase (GGT) and low baseline platelet count (<150,000/μL). Among patients treated with peginterferon plus RBV in routine clinical practice, genotype, baseline viral load, age, GGT level and platelet levels all predict the likelihood of treatment success. In patients matched by baseline characteristics, treatment with peginterferon alfa-2a may be a positive predictor of SVR when compared to peginterferon alfa-2b. PMID:20849436
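    The multivariate analysis described here is a standard logistic regression of SVR on baseline covariates. A minimal Python sketch of that kind of model, fitted to synthetic data (the variable names mirror the predictors above, but the data are randomly generated and are not the study cohort):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "svr": rng.integers(0, 2, n),             # 1 = sustained virological response
            "genotype23": rng.integers(0, 2, n),      # HCV genotype 2/3 vs 1/4
            "low_viral_load": rng.integers(0, 2, n),
            "age_ge_40": rng.integers(0, 2, n),
            "elevated_ggt": rng.integers(0, 2, n),
            "low_platelets": rng.integers(0, 2, n),
            "peg_alfa_2a": rng.integers(0, 2, n),
        })
        model = smf.logit(
            "svr ~ genotype23 + low_viral_load + age_ge_40 + "
            "elevated_ggt + low_platelets + peg_alfa_2a", data=df).fit(disp=0)
        print(np.exp(model.params))   # odds ratios for each baseline predictor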

  12. Brain Structure Anomalies in Autism Spectrum Disorder—A Meta-Analysis of VBM Studies Using Anatomic Likelihood Estimation

    PubMed Central

    Nickl-Jockschat, Thomas; Habel, Ute; Michel, Tanja Maria; Manning, Janessa; Laird, Angela R.; Fox, Peter T.; Schneider, Frank; Eickhoff, Simon B.

    2016-01-01

    Autism spectrum disorders (ASD) are pervasive developmental disorders with characteristic core symptoms such as impairments in social interaction, deviance in communication, repetitive and stereotyped behavior, and impaired motor skills. Anomalies of brain structure have repeatedly been hypothesized to play a major role in the etiopathogenesis of the disorder. Our objective was to perform unbiased meta-analysis on brain structure changes as reported in the current ASD literature. We thus conducted a comprehensive search for morphometric studies by Pubmed query and literature review. We used a revised version of the activation likelihood estimation (ALE) approach for coordinate-based meta-analysis of neuroimaging results. Probabilistic cytoarchitectonic maps were applied to compare the localization of the obtained significant effects to histological areas. Each of the significant ALE clusters was analyzed separately for age effects on gray and white matter density changes. We found six significant clusters of convergence indicating disturbances in the brain structure of ASD patients, including the lateral occipital lobe, the pericentral region, the medial temporal lobe, the basal ganglia, and proximate to the right parietal operculum. Our study provides the first quantitative summary of brain structure changes reported in literature on autism spectrum disorders. In contrast to the rather small sample sizes of the original studies, our meta-analysis encompasses data of 277 ASD patients and 303 healthy controls. This unbiased summary provided evidence for consistent structural abnormalities in spite of heterogeneous diagnostic criteria and voxel-based morphometry (VBM) methodology, but also hinted at a dependency of VBM findings on the age of the patients. PMID:21692142

  13. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  14. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    PubMed

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
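    MEMLET itself is a MATLAB tool, but the idea behind feature 2 (accounting for the shortest and longest detectable events) is easy to illustrate in any language: the PDF is renormalized over the observable window [t_min, t_max] before the log-likelihood is maximized. A minimal single-exponential sketch in Python with synthetic dwell times (this does not use MEMLET's interface):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fit_rate(dwells, t_min=0.01, t_max=10.0):
            """MLE of a single-exponential rate from dwell times that are only
            observable on [t_min, t_max] (instrument dead time / record length)."""
            dwells = np.asarray(dwells)

            def neg_log_lik(log_k):
                k = np.exp(log_k)
                # exponential PDF renormalised over the observable window
                norm = np.exp(-k * t_min) - np.exp(-k * t_max)
                return -np.sum(np.log(k) - k * dwells - np.log(norm))

            res = minimize_scalar(neg_log_lik, bounds=(-6.0, 6.0), method="bounded")
            return np.exp(res.x)

        # synthetic check: true rate 2 s^-1, keep only events inside the observable window
        rng = np.random.default_rng(1)
        raw = rng.exponential(scale=0.5, size=20000)
        observed = raw[(raw >= 0.01) & (raw <= 10.0)]
        print("ML estimate of the rate:", round(fit_rate(observed), 3))  # should be close to 2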

  15. A maximum likelihood direction of arrival estimation method for open-sphere microphone arrays in the spherical harmonic domain.

    PubMed

    Hu, Yuxiang; Lu, Jing; Qiu, Xiaojun

    2015-08-01

    Open-sphere microphone arrays are preferred over rigid-sphere arrays when minimal interaction between array and the measured sound field is required. However, open-sphere arrays suffer from poor robustness at null frequencies of the spherical Bessel function. This letter proposes a maximum likelihood method for direction of arrival estimation in the spherical harmonic domain, which avoids the division of the spherical Bessel function and can be used at arbitrary frequencies. Furthermore, the method can be easily extended to wideband implementation. Simulation and experiment results demonstrate the superiority of the proposed method over the commonly used methods in open-sphere configurations. PMID:26328695

  16. [Estimation of the recombination fraction by the maximum likelihood method in mapping interacting genes relative to marker loci].

    PubMed

    Priiatkina, S N

    2002-05-01

    For mapping nonlinked interacting genes relative to marker loci, log-likelihood functions were derived that permit estimation of recombination fractions by solving the maximum likelihood (ML) equations on the basis of F2 data for various types of interaction. In some cases, the recombination fraction estimates are obtained in analytical form, while in others they are calculated numerically from concrete experimental data. For the same type of epistasis, the log-likelihood functions were shown to differ depending on the functional role (suppression or epistasis) of the mapped gene. Methods for testing the correspondence of the model and the recombination fraction estimates to the experimental data are discussed. In ambiguous cases, analysis of linked-marker behavior makes it possible to differentiate gene interaction from distorted single-locus segregation, which under some forms of interaction imitates the same phenotypic ratios. PMID:12068553
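    As a baseline illustration of the general procedure (without epistasis), the Python sketch below maximizes the multinomial log-likelihood of the recombination fraction r from F2 phenotypic counts for two dominant loci in coupling phase; the interaction-specific log-likelihood functions derived in the paper would replace the class-frequency expressions. The counts are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Observed F2 phenotypic counts (hypothetical): A_B_, A_bb, aaB_, aabb
        counts = np.array([287, 52, 49, 112])

        def class_probs(r):
            """Expected F2 class frequencies for two dominant loci in coupling phase."""
            c = (1.0 - r) / 2.0          # frequency of a parental (ab) gamete
            p_aabb = c ** 2
            p_A_bb = p_aaB_ = 0.25 - c ** 2
            p_A_B_ = 0.5 + c ** 2
            return np.array([p_A_B_, p_A_bb, p_aaB_, p_aabb])

        def neg_log_lik(r):
            return -np.sum(counts * np.log(class_probs(r)))

        res = minimize_scalar(neg_log_lik, bounds=(1e-4, 0.5 - 1e-4), method="bounded")
        print("ML estimate of recombination fraction r:", round(res.x, 4))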

  17. Thinking Concretely Increases the Perceived Likelihood of Risks: The Effect of Construal Level on Risk Estimation.

    PubMed

    Lermer, Eva; Streicher, Bernhard; Sachs, Rainer; Raue, Martina; Frey, Dieter

    2016-03-01

    Recent findings on construal level theory (CLT) suggest that abstract thinking leads to a lower estimated probability of an event occurring compared to concrete thinking. We applied this idea to the risk context and explored the influence of construal level (CL) on the overestimation of small and underestimation of large probabilities for risk estimates concerning a vague target person (Study 1 and Study 3) and personal risk estimates (Study 2). We were specifically interested in whether the often-found overestimation of small probabilities could be reduced with abstract thinking, and whether the often-found underestimation of large probabilities could be reduced with concrete thinking. The results showed that CL influenced risk estimates. In particular, a concrete mindset led to higher risk estimates compared to an abstract mindset for several adverse events, including events with small and large probabilities. This suggests that CL manipulation can indeed be used for improving the accuracy of lay people's estimates of small and large probabilities. Moreover, the results suggest that professional risk managers' risk estimates of common events (thus with a relatively high probability) could be improved by adopting a concrete mindset. However, the abstract manipulation did not lead managers to estimate extremely unlikely events more accurately. Potential reasons for different CL manipulation effects on risk estimates' accuracy between lay people and risk managers are discussed. PMID:26111548

  18. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information [lambda][subscript j] is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  19. Maximum likelihood estimation of population growth rates based on the coalescent.

    PubMed Central

    Kuhner, M K; Yamato, J; Felsenstein, J

    1998-01-01

    We describe a method for co-estimating 4Nemu (four times the product of effective population size and neutral mutation rate) and population growth rate from sequence samples using Metropolis-Hastings sampling. Population growth (or decline) is assumed to be exponential. The estimates of growth rate are biased upwards, especially when 4Nemu is low; there is also a slight upwards bias in the estimate of 4Nemu itself due to correlation between the parameters. This bias cannot be attributed solely to Metropolis-Hastings sampling but appears to be an inherent property of the estimator and is expected to appear in any approach which estimates growth rate from genealogy structure. Sampling additional unlinked loci is much more effective in reducing the bias than increasing the number or length of sequences from the same locus. PMID:9584114

  20. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249

  1. Likelihood parameter estimation for calibrating a soil moisture using radar backscatter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assimilating soil moisture information contained in synthetic aperture radar imagery into land surface model predictions can be done using a calibration, or parameter estimation, approach. The presence of speckle, however, necessitates aggregating backscatter measurements over large land areas in or...

  2. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  3. Process for estimating likelihood and confidence in post detonation nuclear forensics.

    SciTech Connect

    Darby, John L.; Craft, Charles M.

    2014-07-01

    Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.

  4. The Undiscovered Country: Can We Estimate the Likelihood of Extrasolar Planetary Habitability?

    NASA Astrophysics Data System (ADS)

    Unterborn, C. T.; Panero, W. R.; Hull, S. D.

    2015-12-01

    Plate tectonics has operated on Earth for a majority of its lifetime. Tectonics regulates atmospheric carbon and creates a planetary-scale water cycle, and is a primary factor in the Earth being habitable. While the mechanism for initiating tectonics is unknown, as we expand our search for habitable worlds, understanding which planetary compositions produce planets capable of supporting long-term tectonics is of paramount importance. On Earth, the sustenance of tectonics is a function of both its structure and composition. Currently, however, we have no method to measure the interior composition of exoplanets. In our Solar system, though, Solar abundances for refractory elements mirror the Earth's to within ~10%, allowing the adoption of Solar abundances as proxies for Earth's. It is not known, however, whether this mirroring of stellar and terrestrial planet abundances holds true for other star-planet systems without determination of the composition of initial planetesimals via condensation sequence calculations. Currently, all code for calculating these sequences is commercial or closed-source. We present, then, the open-source Arbitrary Composition Condensation Sequence calculator (ArCCoS) for converting the elemental composition of a parent star to that of the planet-building material as well as the extent of oxidation within the planetesimals. These data allow us to constrain the likelihood of one of the main drivers of plate tectonics: the basalt-to-eclogite transition in subducting plates. Unlike basalt, eclogite is denser than the surrounding mantle and thus sinks into the mantle, pulling the overlying slab with it. Without this higher density relative to the mantle, plates stagnate at shallow depths, shutting off plate tectonics. Using the results of ArCCoS as abundance inputs into the MELTS and HeFESTo thermodynamic models, we calculate phase relations for the first basaltic crust and depleted mantle of a terrestrial planet produced from

  5. Functional magnetic resonance imaging during emotion recognition in social anxiety disorder: an activation likelihood meta-analysis

    PubMed Central

    Hattingh, Coenraad J.; Ipser, J.; Tromp, S. A.; Syal, S.; Lochner, C.; Brooks, S. J.; Stein, D. J.

    2012-01-01

    Background: Social anxiety disorder (SAD) is characterized by abnormal fear and anxiety in social situations. Functional magnetic resonance imaging (fMRI) is a brain imaging technique that can be used to demonstrate neural activation to emotionally salient stimuli. However, no attempt has yet been made to statistically collate fMRI studies of brain activation, using the activation likelihood-estimate (ALE) technique, in response to emotion recognition tasks in individuals with SAD. Methods: A systematic search of fMRI studies of neural responses to socially emotive cues in SAD was undertaken. ALE meta-analysis, a voxel-based meta-analytic technique, was used to estimate the most significant activations during emotional recognition. Results: Seven studies were eligible for inclusion in the meta-analysis, constituting a total of 91 subjects with SAD, and 93 healthy controls. The most significant areas of activation during emotional vs. neutral stimuli in individuals with SAD compared to controls were: bilateral amygdala, left medial temporal lobe encompassing the entorhinal cortex, left medial aspect of the inferior temporal lobe encompassing perirhinal cortex and parahippocampus, right anterior cingulate, right globus pallidus, and distal tip of right postcentral gyrus. Conclusion: The results are consistent with neuroanatomic models of the role of the amygdala in fear conditioning, and the importance of the limbic circuitry in mediating anxiety symptoms. PMID:23335892

  6. A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model.

    ERIC Educational Resources Information Center

    Ankenmann, Robert D.; Stone, Clement A.

    Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…

  7. Bayesian and Profile Likelihood Approaches to Time Delay Estimation for Stochastic Time Series of Gravitationally Lensed Quasars

    NASA Astrophysics Data System (ADS)

    Tak, Hyungsuk; Mandel, Kaisey; van Dyk, David A.; Kashyap, Vinay; Meng, Xiao-Li; Siemiginowska, Aneta

    2016-01-01

    The gravitational field of a galaxy can act as a lens and deflect the light emitted by a more distant object such as a quasar. If the galaxy is a strong gravitational lens, it can produce multiple images of the same quasar in the sky. Since the light in each gravitationally lensed image traverses a different path length and gravitational potential from the quasar to the Earth, fluctuations in the source brightness are observed in the several images at different times. We infer the time delay between these fluctuations in the brightness time series data of each image, which can be used to constrain cosmological parameters. Our model is based on a state-space representation for irregularly observed time series data generated from a latent continuous-time Ornstein-Uhlenbeck process. We account for microlensing variations via a polynomial regression in the model. Our Bayesian strategy adopts scientifically motivated hyper-prior distributions and a Metropolis-Hastings within Gibbs sampler. We improve the sampler by using an ancillarity-sufficiency interweaving strategy, and adaptive Markov chain Monte Carlo. We introduce a profile likelihood of the time delay as an approximation to the marginal posterior distribution of the time delay. The Bayesian and profile likelihood approaches complement each other, producing almost identical results; the Bayesian method is more principled but the profile likelihood is faster and simpler to implement. We demonstrate our estimation strategy using simulated data of doubly- and quadruply-lensed quasars from the Time Delay Challenge, and observed data of quasars Q0957+561 and J1029+2623.

  8. Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise

    NASA Astrophysics Data System (ADS)

    Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.

    2015-11-01

    We consider the problem of estimating the direction of arrival of a signal embedded in $K$-distributed noise, when secondary data which contains noise only are assumed to be available. Based upon a recent formula of the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression of the FIM with the two data sets framework. In the specific case of $K$-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for $K$-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming known covariance matrix: the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual $T^{-1}$. Simulations illustrate the different behaviors of the estimators, depending on the FIM being bounded or not.

  9. A flexible decision-aided maximum likelihood phase estimation in hybrid QPSK/OOK coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-04-01

    Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block length effect degrades system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion management (DM) fiber links. Based on our model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block length effect. We derive an analytical expression for the phase error variance to predict the performance of a coherent receiver with the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that our flexible DA-ML algorithm significantly improves performance compared with the conventional DA-ML algorithm when the block length is fixed. Compared with the conventional DA-ML with optimum block length, our flexible DA-ML obtains better system performance. This means the flexible DA-ML algorithm is more effective at mitigating phase noise than the conventional DA-ML algorithm.
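    The abstract does not spell out the estimator, but the conventional block-wise decision-aided ML phase estimate it builds on is simply the argument of the sum of the received samples multiplied by the conjugates of the decided symbols; the flexible variant described above additionally weights the terms of this sum. A generic Python sketch of the conventional (fixed block length, unweighted) estimator for QPSK:

        import numpy as np

        QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy constellation

        def da_ml_phase(received, block_len=16):
            """Conventional block-wise decision-aided ML carrier-phase estimation for QPSK."""
            est = np.empty(len(received))
            phase = 0.0
            for start in range(0, len(received), block_len):
                block = received[start:start + block_len]
                derotated = block * np.exp(-1j * phase)
                # hard symbol decisions using the previous block's phase estimate
                decisions = QPSK[np.argmin(np.abs(derotated[:, None] - QPSK[None, :]), axis=1)]
                # DA-ML estimate: arg of sum_k r_k * conj(d_k)
                phase = np.angle(np.sum(block * np.conj(decisions)))
                est[start:start + block_len] = phase
            return est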

  10. The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction

    SciTech Connect

    Llacer, J.; Veklerov, E.

    1987-01-01

    Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The image deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts. We can modulate this effect. We compared a source image with reconstructions by filtered backprojection and by the MLE algorithm to show that, if the iterative procedure is stopped at an appropriate point, the MLE images can have noise similar to the filtered backprojection images in regions of high activity and very low noise, comparable to the source image, in regions of low activity.
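    The iterative MLE reconstruction referred to here is the standard MLEM (maximum-likelihood expectation-maximization) update for Poisson emission data. A minimal dense-matrix Python sketch follows; practical implementations use projector operators and, as the abstract notes, must stop or regularize before noise amplification dominates.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """Classical MLEM reconstruction for emission tomography.

            A : (m, n) system matrix, A[i, j] = probability that an emission in voxel j
                is detected in projection bin i
            y : (m,) measured counts
            """
            x = np.ones(A.shape[1])               # uniform non-negative initial image
            sens = A.sum(axis=0)                  # sensitivity image A^T 1
            for _ in range(n_iter):
                forward = A @ x                   # expected counts given the current image
                ratio = y / np.maximum(forward, eps)
                x *= (A.T @ ratio) / np.maximum(sens, eps)
            return x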

  11. A real-time signal combining system for Ka-band feed arrays using maximum-likelihood weight estimates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1990-01-01

    A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
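    The report's ML weight estimator is not reproduced here; as a generic illustration of deriving combining weights directly from the observed samples over a sliding window, the Python sketch below uses the principal eigenvector of the short-term sample covariance, which maximizes output SNR for a single dominant signal.

        import numpy as np

        def combine(snapshots, window=200):
            """Sliding-window eigenvector combining for a feed array (illustrative only).

            snapshots : (n_elements, n_samples) complex baseband samples
            Returns the combined output stream.
            """
            n_el, n_samp = snapshots.shape
            out = np.empty(n_samp, dtype=complex)
            for start in range(0, n_samp, window):
                block = snapshots[:, start:start + window]
                R = block @ block.conj().T / block.shape[1]   # sample covariance of the window
                _, eigvecs = np.linalg.eigh(R)
                w = eigvecs[:, -1]       # principal eigenvector (max output SNR,
                                         # defined only up to an arbitrary phase per window)
                out[start:start + window] = w.conj() @ block
            return out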

  12. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson-maximum likelihood estimation method (PMLE) for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and the PSNR (peak signal-to-noise ratio) are used to validate the restoration quality. The simulation results show that the number of iterations affects both the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and is practical to apply.
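
    The core of a Poisson maximum-likelihood (Richardson-Lucy type) restoration is the multiplicative update sketched below. The automatic acceleration step mentioned in the abstract is omitted, and the point-spread function `psf` is a placeholder.

```python
import numpy as np
from scipy.signal import fftconvolve

def poisson_mle_restore(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy style Poisson maximum-likelihood restoration (sketch).

    blurred : 2-D degraded image, psf : 2-D blur kernel. The standard PMLE
    multiplicative update is applied for a fixed number of iterations.
    """
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same") + eps
        ratio = blurred / conv
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```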

  13. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    PubMed

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method is tolerant to error rates of up to 5% per locus and can theoretically work in datasets with as few as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for downstream analysis with traditional mark-recapture methods. PMID:21763337
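
    The flavor of the pairwise evaluation can be illustrated with a per-locus likelihood ratio of "same individual" versus "different individuals", allowing for a typing-error rate and skipping missing loci. The formulas, names, and error model below are illustrative simplifications, not the authors' exact likelihood.

```python
import math

def pair_log_lr(geno_a, geno_b, geno_freqs, error_rate=0.02):
    """Log likelihood ratio that two (possibly incomplete) genotype records
    come from the same individual rather than two random individuals (sketch).

    geno_a, geno_b : dicts locus -> genotype (None or absent if missing)
    geno_freqs     : dict locus -> {genotype: population frequency}
    error_rate     : per-locus probability of a typing error (illustrative)
    """
    log_lr = 0.0
    for locus, freqs in geno_freqs.items():
        a, b = geno_a.get(locus), geno_b.get(locus)
        if a is None or b is None:
            continue                                      # missing loci carry no evidence
        p_match_random = sum(f * f for f in freqs.values())   # chance match, different animals
        if a == b:
            p_same, p_diff = (1 - error_rate) ** 2, p_match_random
        else:
            p_same, p_diff = 1 - (1 - error_rate) ** 2, 1 - p_match_random
        log_lr += math.log(p_same / p_diff)
    return log_lr
```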

  14. Developing New Rainfall Estimates to Identify the Likelihood of Agricultural Drought in Mesoamerica

    NASA Astrophysics Data System (ADS)

    Pedreros, D. H.; Funk, C. C.; Husak, G. J.; Michaelsen, J.; Peterson, P.; Lasndsfeld, M.; Rowland, J.; Aguilar, L.; Rodriguez, M.

    2012-12-01

    The population of Central America was estimated at ~40 million people in 2009, with 65% in rural areas relying directly on local agricultural production for subsistence, and additional urban populations relying on regional production. Mapping rainfall patterns and amounts in Central America is a complex task due to the rough topography and the influence of the two oceans on either side of this narrow land mass. Characterization of precipitation amounts in both time and space is of great importance for monitoring agricultural food production for food security analysis. With the goal of developing reliable rainfall fields, the Famine Early Warning Systems Network (FEWS NET) has compiled a dense set of historical rainfall stations for Central America through cooperation with meteorological services and global databases. The station database covers the years 1900-present, with the highest density between 1970-2011. Interpolating the station data by itself does not provide reliable results because it ignores the topographic influences that dominate the region. To account for this, climatological rainfall fields were used to support the interpolation of the station data using a modified Inverse Distance Weighting process. By blending the station data with the climatological fields, a historical rainfall database was compiled for 1970-2011 at a 5 km resolution for every five-day interval. This new database opens the door to analyses such as the impact of sea surface temperature on rainfall patterns, changes to the typical dry spell during the rainy season, and characterization of drought frequency and rainfall trends, among others. This study uses the historical database to identify the frequency of agricultural drought in the region and explores possible changes in precipitation patterns during the past 40 years. A threshold of 500 mm of rainfall during the growing season was used to define agricultural drought for maize. This threshold was selected based on assessments of crop
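
    A toy version of blending station observations with a climatological field, as described above, might look like the following: station-to-climatology ratios are interpolated with inverse-distance weighting and then multiplied back onto the climatological grid so that topographic structure comes from the climatology. This is only a sketch of the general idea, not FEWS NET's operational procedure; all names and the distance power are illustrative.

```python
import numpy as np

def blend_with_climatology(stations_xy, station_rain, clim_at_stations,
                           grid_xy, clim_grid, power=2.0):
    """Climatology-aided inverse-distance weighting (sketch).

    stations_xy      : (n_sta, 2) station coordinates
    station_rain     : (n_sta,) observed rainfall totals
    clim_at_stations : (n_sta,) climatological rainfall at the stations
    grid_xy, clim_grid : grid-cell coordinates and climatological field
    """
    ratios = station_rain / np.maximum(clim_at_stations, 0.1)   # avoid divide-by-zero
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d = np.sqrt(((stations_xy - p) ** 2).sum(axis=1))
        if np.any(d < 1e-6):                      # grid cell contains a station
            out[i] = station_rain[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[i] = clim_grid[i] * np.sum(w * ratios) / w.sum()
    return out
```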

  15. Cultivation and counter cultivation: does religiosity shape the relationship between television viewing and estimates of crime prevalence and assessment of victimization likelihood?

    PubMed

    Hetsroni, Amir; Lowenstein, Hila

    2013-02-01

    Religiosity may change the direction of the effect of TV viewing on assessment of the likelihood of personal victimization and estimates concerning crime prevalence. A content analysis of a representative sample of TV programming (56 hours of prime-time shows) was done to identify the most common crimes on television, followed by a survey of a representative sample of the adult public in a large urban district (778 respondents) who were asked to estimate the prevalence of these crimes and to assess the likelihood of themselves being victimized. People who defined themselves as non-religious increased their estimates of prevalence for crimes often depicted on TV, as they reported more time watching TV (ordinary cultivation effect), whereas estimates regarding the prevalence of crime and assessment of victimization likelihood among religious respondents were lower with reports of more time devoted to television viewing (counter-cultivation effect). PMID:23654044

  16. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement.

    PubMed

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
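
    One simple way to approach the joint ML criterion is coordinate ascent: alternately rebuild a template from the currently aligned trials and re-estimate each trial's delay against that template, as sketched below. This is an illustrative scheme consistent with the problem statement, not the exact estimators proposed in the paper.

```python
import numpy as np

def joint_ml_tde(trials, max_lag=50, n_iter=5):
    """Joint ML-style time-delay estimation across ERP trials (sketch).

    trials : (n_trials, n_samples) array of noisy, delayed copies of a signal.
    Alternates (1) building a template from the aligned trials and
    (2) re-estimating each delay as the lag maximizing cross-correlation
    with the template. Circular shifts are used for brevity.
    """
    n_trials, n_samp = trials.shape
    delays = np.zeros(n_trials, dtype=int)
    for _ in range(n_iter):
        aligned = np.array([np.roll(trials[k], -delays[k]) for k in range(n_trials)])
        template = aligned.mean(axis=0)
        for k in range(n_trials):
            corr = [np.dot(np.roll(trials[k], -lag), template)
                    for lag in range(-max_lag, max_lag + 1)]
            delays[k] = np.argmax(corr) - max_lag
    return delays
```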

  17. MLE (Maximum Likelihood Estimator) reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    SciTech Connect

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.

  18. Rapid radiation events in the family Ursidae indicated by likelihood phylogenetic estimation from multiple fragments of mtDNA.

    PubMed

    Waits, L P; Sullivan, J; O'Brien, S J; Ward, R H

    1999-10-01

    The bear family (Ursidae) presents a number of phylogenetic ambiguities as the evolutionary relationships of the six youngest members (ursine bears) are largely unresolved. Recent mitochondrial DNA analyses have produced conflicting results with respect to the phylogeny of ursine bears. In an attempt to resolve these issues, we obtained 1916 nucleotides of mitochondrial DNA sequence data from six gene segments for all eight bear species and conducted maximum likelihood and maximum parsimony analyses on all fragments separately and combined. All six single-region gene trees gave different phylogenetic estimates; however, only for control region data was this significantly incongruent with the results from the combined data. The optimal phylogeny for the combined data set suggests that the giant panda is most basal followed by the spectacled bear. The sloth bear is the basal ursine bear, and there is weak support for a sister taxon relationship of the American and Asiatic black bears. The sun bear is sister taxon to the youngest clade containing brown bears and polar bears. Statistical analyses of alternate hypotheses revealed a lack of strong support for many of the relationships. We suggest that the difficulties surrounding the resolution of the evolutionary relationships of the Ursidae are linked to the existence of sequential rapid radiation events in bear evolution. Thus, unresolved branching orders during these time periods may represent an accurate representation of the evolutionary history of bear species. PMID:10508542

  19. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  20. ROC (Receiver Operating Characteristics) study of maximum likelihood estimator human brain image reconstructions in PET (Positron Emission Tomography) clinical practice

    SciTech Connect

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of ¹⁸F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.

  1. Separating components of variation in measurement series using maximum likelihood estimation. Application to patient position data in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sage, J. P.; Mayles, W. P. M.; Mayles, H. M.; Syndikus, I.

    2014-10-01

    Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series where the same quantity is measured using different independent methods. The technique was tested against artificial data sets generated for values of underlying variation in the quantity and measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were determined to within 0.1 mm. The technique was applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel, it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods.
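
    For the simplest case described above (two independent methods, one measurement each per patient), the MLE separation of underlying variation from per-method measurement error can be written directly as a bivariate-normal likelihood and maximized numerically, as in the sketch below; the names and starting values are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def fit_variance_components(y1, y2):
    """MLE split of underlying variation vs. per-method measurement error (sketch).

    Each pair (y1_i, y2_i) measures the same underlying offset q_i once per
    method, so it is bivariate normal with covariance
    [[s_q + s_1, s_q], [s_q, s_q + s_2]] (variances). The three standard
    deviations are found by maximizing the Gaussian likelihood.
    """
    data = np.column_stack([y1, y2])
    mean = data.mean(axis=0)

    def negloglik(log_sd):
        sq, s1, s2 = np.exp(2 * log_sd)                 # variances
        cov = np.array([[sq + s1, sq], [sq, sq + s2]])
        return -multivariate_normal(mean, cov).logpdf(data).sum()

    res = minimize(negloglik, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)                                # (sigma_q, sigma_1, sigma_2)
```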

  2. Decision-aided maximum likelihood phase estimation with optimum block length in hybrid QPSK/16QAM coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-01-01

    We propose a general model to entirely describe XPM effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of phase error variance for optimizing block length of QPSK channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of coherent receiver. Bit error rate (BER) performance in QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using the DA-ML with optimum block length, bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block length of 10, 20 and 40 at a BER of 10^-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when in-line residual dispersion is 0 ps/nm.

  3. Population pharmacokinetics of docetaxel during phase I studies using nonlinear mixed-effect modeling and nonparametric maximum-likelihood estimation.

    PubMed

    Launay-Iliadis, M C; Bruno, R; Cosson, V; Vergniol, J C; Oulid-Aissa, D; Marty, M; Clavel, M; Aapro, M; Le Bail, N; Iliadis, A

    1995-01-01

    Docetaxel, a novel anticancer agent, was given to 26 patients by short i.v. infusion (1-2 h) at various dose levels (70-115 mg/m2, the maximum tolerated dose) during 2 phase I studies. Two population analyses, one using NONMEM (nonlinear mixed-effect modeling) and the other using NPML (nonparametric maximum-likelihood), were performed sequentially to determine the structural model; estimate the mean population parameters, including clearance (Cl) and interindividual variability; and find influences of demographic covariates on them. Nine covariates were included in the analyses: age, height, weight, body surface area, sex, performance status, presence of liver metastasis, dose level, and type of formulation. A three-compartment model gave the best fit to the data, and the final NONMEM regression model for Cl was Cl = BSA(Theta1 + Theta2 x AGE), expressing Cl (in liters per hour) directly as a function of body surface area. Only these two covariates were considered in the NPML analysis to confirm the results found by NONMEM. Using NONMEM [for a patient with mean AGE (52.3 years) and mean BSA (1.68 m2)] and NPML, docetaxel Cl was estimated to be 35.6 l/h (21.2 l h^-1 m^-2) and 37.2 l/h with interpatient coefficients of variation (CVs) of 17.4% and 24.8%, respectively. The intraindividual CV was estimated at 23.8% by NONMEM; the corresponding variability was fixed in NPML in an additive Gaussian variance error model with a 20% CV. Discrepancies were found in the mean volume at steady state (Vss; 83.2 l for NPML versus 124 l for NONMEM) and in terminal half-lives, notably the mean t1/2 gamma, which was shorter as determined by NPML (7.89 versus 12.2 h), although the interindividual CV was 89.1% and 62.7% for Vss and t1/2 gamma, respectively. However, the NPML-estimated probability density function (pdf) of t1/2 gamma was bimodal (5 and 11.4 h), probably due to the imbalance of the data. Both analyses suggest a similar magnitude of mean Cl decrease with small BSA and

  4. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matérn form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal being to assign likely formation histories for the patches under consideration. Our results
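
    A one-dimensional caricature of the Whittle procedure, assuming an evenly sampled profile: the periodogram is compared to a parametric Matérn-like spectrum and the spectral-domain likelihood is minimized. The 2-D treatment, data tapering, and edge-effect corrections of the actual method are omitted, and the parameterization below absorbs the Matérn normalizing constant into the variance-like parameter s2.

```python
import numpy as np
from scipy.optimize import minimize

def whittle_matern_1d(profile, dx=1.0):
    """Whittle (spectral-domain) ML fit of a Matérn-like spectrum (1-D sketch).

    The Whittle negative log-likelihood sum_f [ log S(f) + I(f)/S(f) ] is
    evaluated over the positive Fourier frequencies, with
    S(f) = s2 / (1/rho**2 + (2*pi*f)**2)**(nu + 0.5).
    """
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()
    n = len(z)
    freqs = np.fft.rfftfreq(n, d=dx)[1:]                  # positive Fourier frequencies
    I = (np.abs(np.fft.rfft(z))[1:] ** 2) * dx / n        # periodogram

    def negloglik(log_params):
        s2, rho, nu = np.exp(log_params)
        S = s2 / (1.0 / rho**2 + (2 * np.pi * freqs) ** 2) ** (nu + 0.5)
        return np.sum(np.log(S) + I / S)                  # Whittle criterion

    res = minimize(negloglik, x0=np.log([1.0, 10.0 * dx, 1.0]), method="Nelder-Mead")
    s2, rho, nu = np.exp(res.x)
    return {"s2": s2, "range": rho, "smoothness": nu}
```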

  5. Maximum likelihood estimation of label imperfection probabilities and its use in the identification of mislabeled patterns. [with application to Landsat MSS data processing

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1980-01-01

    The estimation of label imperfections and the use of these estimates in the identification of mislabeled patterns are discussed. Expressions are presented for the asymptotic variances of the probability of correct classification and proportion, and for the maximum likelihood estimates of classification errors and a priori probabilities. Models are developed for imperfections in the labels and for classification errors, and expressions are derived for the probability of imperfect label identification schemes resulting in wrong decisions. These expressions are used in computing thresholds, and the techniques are given practical applications. The imperfect label identification scheme in the multiclass case is found to amount to establishing a region around each decision surface, and the decisions of the label correction scheme are found to be in close agreement with the analyst-interpreter interpretations of the imagery films. As an example, the application of maximum likelihood estimation to the processing of Landsat MSS data is discussed.

  6. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665

  7. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  8. Reducing the likelihood of future human activities that could affect geologic high-level waste repositories

    SciTech Connect

    Not Available

    1984-05-01

    The disposal of radioactive wastes in deep geologic formations provides a means of isolating the waste from people until the radioactivity has decayed to safe levels. However, isolating people from the wastes is a different problem, since we do not know what the future condition of society will be. The Human Interference Task Force was convened by the US Department of Energy to determine whether reasonable means exist (or could be developed) to reduce the likelihood of future humans unintentionally intruding on radioactive waste isolation systems. The task force concluded that significant reductions in the likelihood of human interference could be achieved, for perhaps thousands of years into the future, if appropriate steps are taken to communicate the existence of the repository. Consequently, for two years the task force directed most of its study toward the area of long-term communication. Methods are discussed for achieving long-term communication by using permanent markers and widely disseminated records, with various steps taken to provide multiple levels of protection against loss, destruction, and major language/societal changes. Also developed is the concept of a universal symbol to denote "Caution - Biohazardous Waste Buried Here". If used for the thousands of non-radioactive biohazardous waste sites in this country alone, such a symbol could transcend generations and language changes, thereby vastly improving the likelihood of successful isolation of all buried biohazardous wastes.

  9. Item-Weighted Likelihood Method for Ability Estimation in Tests Composed of Both Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua

    2012-01-01

    For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…

  10. Necessary conditions for a maximum likelihood estimate to become asymptotically unbiased and attain the Cramer-Rao lower bound. Part I. General approach with an application to time-delay and Doppler shift estimation.

    PubMed

    Naftali, E; Makris, N C

    2001-10-01

    Analytic expressions for the first order bias and second order covariance of a general maximum likelihood estimate (MLE) are presented. These expressions are used to determine general analytic conditions on sample size, or signal-to-noise ratio (SNR), that are necessary for a MLE to become asymptotically unbiased and attain minimum variance as expressed by the Cramer-Rao lower bound (CRLB). The expressions are then evaluated for multivariate Gaussian data. The results can be used to determine asymptotic biases, variances, and conditions for estimator optimality in a wide range of inverse problems encountered in ocean acoustics and many other disciplines. The results are then applied to rigorously determine conditions on SNR necessary for the MLE to become unbiased and attain minimum variance in the classical active sonar and radar time-delay and Doppler-shift estimation problems. The time-delay MLE is the time lag at the peak value of a matched filter output. It is shown that the matched filter estimate attains the CRLB for the signal's position when the SNR is much larger than the kurtosis of the expected signal's energy spectrum. The Doppler-shift MLE exhibits dual behavior for narrowband analytic signals. In a companion paper, the general theory presented here is applied to the problem of estimating the range and depth of an acoustic source submerged in an ocean waveguide. PMID:11681372
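
    As a concrete reference for the time-delay case discussed above, the matched-filter MLE is simply the lag at the cross-correlation peak; the SNR conditions derived in the paper govern when this estimate becomes unbiased and attains the CRLB. A minimal sketch, with all names illustrative:

```python
import numpy as np

def matched_filter_delay(received, template, fs):
    """Time-delay MLE as the peak of the matched-filter output (sketch).

    For a known signal in additive white Gaussian noise, the ML delay estimate
    is the lag at which the cross-correlation of the received data with the
    template is largest.
    """
    corr = np.correlate(received, template, mode="full")
    lag = np.argmax(corr) - (len(template) - 1)
    return lag / fs                                      # delay in seconds
```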

  11. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  12. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    PubMed

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777

  13. Sequential transformation for multiple traits for estimation of (co)variance components with a derivative-free algorithm for restricted maximum likelihood.

    PubMed

    Van Vleck, L D; Boldman, K G

    1993-04-01

    Transformation of multiple-trait records that undergo sequential selection can be used with derivative-free algorithms to maximize the restricted likelihood in estimation of covariance matrices as with derivative methods. Data transformation with appropriate parts of the Choleski decomposition of the current estimate of the residual covariance matrix results in mixed-model equations that are easily modified from round to round for calculation of the logarithm of the likelihood. The residual sum of squares is the same for transformed and untransformed analyses. Most importantly, the logarithm of the determinant of the untransformed coefficient matrix is an easily determined function of the Choleski decomposition of the residual covariance matrix and the determinant of the transformed coefficient matrix. Thus, the logarithm of the likelihood for any combination of covariance matrices can be determined from the transformed equations. Advantages of transformation are 1) the multiple-trait mixed-model equations are easy to set up, 2) the least squares part of the equations does not change from round to round, 3) right-hand sides change from round to round by constant multipliers, and 4) less memory is required. An example showed only a slight advantage of the transformation compared with no transformation in terms of solution time for each round (1 to 5%). PMID:8478285

  14. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach

    PubMed Central

    2013-01-01

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of experimentally observed data with respect to the model parameters by following the gradient of the logarithm of the likelihood. Here, we employ a ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370

  15. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients

    PubMed Central

    Urgesi, Cosimo; Candidi, Matteo; Avenanti, Alessio

    2014-01-01

    Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal, and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding. PMID:24910603

  16. Application of maximum likelihood estimator in nano-scale optical path length measurement using spectral-domain optical coherence phase microscopy

    PubMed Central

    Motaghian Nezam, S. M. R.; Joo, C; Tearney, G. J.; de Boer, J. F.

    2009-01-01

    Spectral-domain optical coherence phase microscopy (SD-OCPM) measures minute phase changes in transparent biological specimens using a common path interferometer and a spectrometer based optical coherence tomography system. The Fourier transform of the acquired interference spectrum in spectral-domain optical coherence tomography (SD-OCT) is complex and the phase is affected by contributions from inherent random noise. To reduce this phase noise, knowledge of the probability density function (PDF) of data becomes essential. In the present work, the intensity and phase PDFs of the complex interference signal are theoretically derived and the optical path length (OPL) PDF is experimentally validated. The full knowledge of the PDFs is exploited for optimal estimation (Maximum Likelihood estimation) of the intensity, phase, and signal-to-noise ratio (SNR) in SD-OCPM. Maximum likelihood (ML) estimates of the intensity, SNR, and OPL images are presented for two different scan modes using Bovine Pulmonary Artery Endothelial (BPAE) cells. To investigate the phase accuracy of SD-OCPM, we experimentally calculate and compare the cumulative distribution functions (CDFs) of the OPL standard deviation and the square root of the Cramér-Rao lower bound (1/√(2·SNR)) over 100 BPAE images for two different scan modes. The correction to the OPL measurement by applying ML estimation to SD-OCPM for BPAE cells is demonstrated. PMID:18957999

  17. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
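
    The Poisson arithmetic behind the quoted probabilities is straightforward: with roughly 7 VEI >= 4 eruptions expected per decade (the abstract's own estimate), the chance of at least one is 1 - e^-7, which is greater than 99%. A minimal sketch; the rates behind the VEI >= 5 and VEI >= 6 figures are not given above and are not assumed here.

```python
from math import exp, factorial

def prob_at_least(k, rate):
    """P(X >= k) for a Poisson-distributed count X with mean `rate`."""
    return 1.0 - sum(rate**i * exp(-rate) / factorial(i) for i in range(k))

# With ~7 cataclysmic (VEI >= 4) eruptions expected per decade, the probability
# of at least one during 2000-2009 is essentially 1 (> 99%), as quoted above.
print(round(prob_at_least(1, 7.0), 4))
```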

  18. User's guide: Nimbus-7 Earth radiation budget narrow-field-of-view products. Scene radiance tape products, sorting into angular bins products, and maximum likelihood cloud estimation products

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard

    1990-01-01

    The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.

  19. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data generated with a coupled electromagnetic circuits approach, in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes. PMID:27038887

  20. Uncertainty assessment in watershed-scale water quality modeling and management: 1. Framework and application of generalized likelihood uncertainty estimation (GLUE) approach

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Keller, Arturo A.

    2007-08-01

    Watershed-scale water quality models involve substantial uncertainty in model output because of sparse water quality observations and other sources of uncertainty. Assessing the uncertainty is very important for those who use the models to support management decision making. Systematic uncertainty analysis for these models has rarely been done and remains a major challenge. This study aimed (1) to develop a framework to characterize all important sources of uncertainty and their interactions in management-oriented watershed modeling, (2) to apply the generalized likelihood uncertainty estimation (GLUE) approach for quantifying simulation uncertainty for complex watershed models, and (3) to investigate the influence of subjective choices (especially the likelihood measure) in a GLUE analysis, as well as the availability of observational data, on the outcome of the uncertainty analysis. A two-stage framework was first established as the basis for uncertainty assessment and probabilistic decision-making. A watershed model (watershed analysis risk management framework (WARMF)) was implemented using data from the Santa Clara River Watershed in southern California. A typical catchment was constructed on which a series of experiments was conducted. The results show that GLUE can be implemented with affordable computational cost, yielding insights into the model behavior. However, in complex watershed water quality modeling, the uncertainty results highly depend on the subjective choices made by the modeler as well as the availability of observational data. The importance of considering management concerns in the uncertainty estimation was also demonstrated. Overall, this study establishes guidance for uncertainty assessment in management-oriented watershed modeling. The study results have suggested future efforts we could make in a GLUE-based uncertainty analysis, which has led to the development of a new method, as will be introduced in a companion paper. Eventually, the
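
    The mechanics of a GLUE analysis are simple enough to sketch: sample parameter sets from prior ranges, run the model, score each run with an informal likelihood measure, keep the "behavioral" runs above a threshold, and use the normalized scores as weights for prediction bounds. The simulator, prior sampler, likelihood measure (Nash-Sutcliffe efficiency here), and threshold below are all placeholders, which is exactly the kind of subjective choice the abstract highlights.

```python
import numpy as np

def glue(simulator, sample_params, observed, n_samples=5000, threshold=0.3):
    """Minimal GLUE loop (sketch).

    simulator(theta) -> simulated series on the observation dates
    sample_params()  -> one parameter set drawn from the prior ranges
    Runs scoring above `threshold` on Nash-Sutcliffe efficiency are kept as
    behavioral; their normalized scores act as likelihood weights.
    """
    obs = np.asarray(observed, dtype=float)
    obs_var = obs.var()
    kept_params, scores, sims = [], [], []
    for _ in range(n_samples):
        theta = sample_params()
        sim = np.asarray(simulator(theta), dtype=float)
        ns = 1.0 - np.mean((sim - obs) ** 2) / obs_var        # Nash-Sutcliffe efficiency
        if ns > threshold:                                    # behavioral run
            kept_params.append(theta)
            scores.append(ns)
            sims.append(sim)
    if not scores:
        raise RuntimeError("no behavioral runs; widen priors or lower the threshold")
    weights = np.array(scores)
    weights /= weights.sum()
    # weighted quantiles of the rows of np.array(sims) give prediction bounds
    return kept_params, weights, np.array(sims)
```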

  1. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters so that, via Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable, and detects multimodality.
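
    For contrast with MLE, the core of the MCMC alternative is just a random-walk Metropolis sampler over the deterministic and stochastic parameters, as sketched below. The log-posterior of the trend plus power-law/white-noise model, the adaptive step-size tuning, and the spectral convergence diagnostic described above are not reproduced here; `logpost` and `step` are placeholders.

```python
import numpy as np

def metropolis(logpost, theta0, step, n_samples=20000, rng=None):
    """Plain random-walk Metropolis sampler (sketch).

    logpost(theta) -> log-posterior density (up to a constant) of the model
    parameters for one geodetic time series; step is the proposal scale.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```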

  2. Joint estimation of soil moisture profile and hydraulic parameters by ground-penetrating radar data assimilation with maximum likelihood ensemble filter

    NASA Astrophysics Data System (ADS)

    Tran, Anh Phuong; Vanclooster, Marnik; Zupanski, Milija; Lambot, Sébastien

    2014-04-01

    Ground-Penetrating Radar (GPR) has recently become a powerful geophysical technique to characterize soil moisture at the field scale. We developed a data assimilation scheme to simultaneously estimate the vertical soil moisture profile and hydraulic parameters from time-lapse GPR measurements. The assimilation scheme includes a soil hydrodynamic model to simulate the soil moisture dynamics, a full-wave electromagnetic wave propagation model and petrophysical relationship to link the state variable with the GPR data, and a maximum likelihood ensemble assimilation algorithm. The hydraulic parameters are estimated jointly with the soil moisture using a state augmentation technique. The approach allows for the direct assimilation of GPR data, thus maximizing the use of the information. The proposed approach was validated by numerical experiments assuming wrong initial conditions and hydraulic parameters. The synthetic soil moisture profiles were generated by the Hydrus-1D model, which then were used by the electromagnetic model and petrophysical relationship to create "observed" GPR data. The results show that the data assimilation significantly improves the accuracy of the hydrodynamic model prediction. Compared with surface soil moisture assimilation, the GPR data assimilation better estimates the soil moisture profile and hydraulic parameters. The results also show that the estimated soil moisture profile in the loamy sand and silt soils converges to the "true" state more rapidly than in the clay soil. Of the three unknown parameters of the Mualem-van Genuchten model, the estimation of n is more accurate than that of α and Ks. The approach shows great promise for using GPR measurements for soil moisture profile and hydraulic parameter estimation at the field scale.

  3. A simple three-stage carrier phase estimation algorithm for 16-QAM systems based on QPSK partitioning and maximum likelihood detection

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Huang, Xuguang

    2015-05-01

    This paper proposes a simple three-stage carrier phase estimation (CPE) algorithm for 16-ary quadrature amplitude modulation (16-QAM) optical coherent systems, based on a simplified quadrature phase shift keying partitioning (QPSK-partitioning) scheme for the first stage and maximum likelihood (ML) detection for the second and third stages. Only 25% of the symbols of 16-QAM systems are employed for the first-stage phase estimation, while only 50% are used for the second-stage phase estimation. Therefore, the computational complexity of the proposed three-stage CPE algorithm for 16-QAM systems is similar to that of the QPSK-partitioning+ML algorithm. The performance of two different ML detections is compared, and the simulation results show that the "constellation-assisted" ML detection can achieve better linewidth tolerance than the "conventional" ML detection for 16-QAM systems. A combined linewidth-symbol duration product of 1×10^-4 is tolerable for a signal-to-noise ratio (SNR) sensitivity penalty of 0.8 dB at a BER of 1×10^-3, based on block averaging instead of sliding-window averaging. A good bit error rate (BER) performance for the proposed three-stage CPE algorithm is achieved, especially at high SNR levels in the simulation. The performance of the proposed three-stage CPE algorithm is similar to that of the BPS algorithm with 22 test phase angles, while reducing the computational complexity by a factor of about 5.3.

  4. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.

  5. Maximum likelihood estimate of life expectancy in the prehistoric Jomon: Canine pulp volume reduction suggests a longer life expectancy than previously thought.

    PubMed

    Sasaki, Tomohiko; Kondo, Osamu

    2016-09-01

    Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the prehistoric Jomon-period populace in Japan was initially estimated at ∼16 years, while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon-period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age indicator, which, because of tooth durability, is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit to the observations in the Jomon-period sample. Without adjustment for the known rate of pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. When the rate was adjusted, however, the estimate falls within the range of modern hunter-gatherers, with a significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15 than the result without adjustment. Considering the ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon-period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years. PMID:27346085

  6. Stimulus Complexity and Categorical Effects in Human Auditory Cortex: An Activation Likelihood Estimation Meta-Analysis

    PubMed Central

    Samson, Fabienne; Zeffiro, Thomas A.; Toussaint, Alain; Belin, Pascal

    2011-01-01

    Investigations of the functional organization of human auditory cortex typically examine responses to different sound categories. An alternative approach is to characterize sounds with respect to their amount of variation in the time and frequency domains (i.e., spectral and temporal complexity). Although the vast majority of published studies examine contrasts between discrete sound categories, an alternative complexity-based taxonomy can be evaluated through meta-analysis. In a quantitative meta-analysis of 58 auditory neuroimaging studies, we examined the evidence supporting current models of functional specialization for auditory processing using grouping criteria based on either categories or spectro-temporal complexity. Consistent with current models, analyses based on typical sound categories revealed hierarchical auditory organization and left-lateralized responses to speech sounds, with high speech sensitivity in the left anterior superior temporal cortex. Classification of contrasts based on spectro-temporal complexity, on the other hand, revealed a striking within-hemisphere dissociation in which caudo-lateral temporal regions in auditory cortex showed greater sensitivity to spectral changes, while anterior superior temporal cortical areas were more sensitive to temporal variation, consistent with recent findings in animal models. The meta-analysis thus suggests that spectro-temporal acoustic complexity represents a useful alternative taxonomy to investigate the functional organization of human auditory cortex. PMID:21833294

  7. In vivo thickness dynamics measurement of tear film lipid and aqueous layers with optical coherence tomography and maximum-likelihood estimation.

    PubMed

    Huang, Jinxin; Hindman, Holly B; Rolland, Jannick P

    2016-05-01

    Dry eye disease (DED) is a common ophthalmic condition that is characterized by tear film instability and leads to ocular surface discomfort and visual disturbance. Advancements in the understanding and management of this condition have been limited by our ability to study the tear film, secondary to its thin structure and dynamic nature. Here, we report a technique to simultaneously estimate the thickness of both the lipid and aqueous layers of the tear film in vivo using optical coherence tomography and maximum-likelihood estimation. After a blink, the lipid layer thickened rapidly at an average rate of 10 nm/s over the first 2.5 s before stabilizing, whereas the aqueous layer continued thinning at an average rate of 0.29 μm/s over the remainder of the 10 s blink cycle. Further development of this tear film imaging technique may allow for the elucidation of events that trigger tear film instability in DED. PMID:27128054

  8. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with Targeted Maximum Likelihood Estimation

    PubMed Central

    Schnitzer, Mireille E; Moodie, Erica EM; van der Laan, Mark J; Platt, Robert W; Klein, Marina B

    2013-01-01

    Despite modern effective HIV treatment, hepatitis C virus (HCV) co-infection is associated with a high risk of progression to end-stage liver disease (ESLD), which has emerged as the primary cause of death in this population. Clinical interest lies in determining the impact of clearance of HCV on the risk for ESLD. In this case study, we examine whether HCV clearance affects the risk of ESLD using data from the multicenter Canadian Co-infection Cohort Study. Complications in this survival analysis arise from the time-dependent nature of the data, the presence of baseline confounders, loss to follow-up, and confounders that change over time, all of which can obscure the causal effect of interest. Additional challenges include non-censoring variable missingness and event sparsity. In order to efficiently estimate the ESLD-free survival probabilities under a specific history of HCV clearance, we demonstrate the doubly robust and semiparametric efficient method of Targeted Maximum Likelihood Estimation (TMLE). Marginal structural models (MSM) can be used to model the effect of viral clearance (expressed as a hazard ratio) on ESLD-free survival, and we demonstrate a way to estimate the parameters of a logistic model for the hazard function with TMLE. We show the theoretical derivation of the efficient influence curves for the parameters of two different MSMs and how they can be used to produce variance approximations for parameter estimates. Finally, the data analysis evaluating the impact of HCV on ESLD was undertaken using multiple imputation to account for the non-monotone missing data. PMID:24571372

  9. Estimating Effective Elastic Thickness on Venus from Gravity and Topography: Robust Results from Multi-taper and Maximum-Likelihood Analysis

    NASA Astrophysics Data System (ADS)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.

    2012-12-01

    Venus has undergone a markedly different evolution than Earth. Its tectonics do not resemble the plate-tectonic system observed on Earth, and many surface features, such as tesserae and coronae, lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere. Lithospheric parameters such as the effective elastic thickness have previously been estimated from the correlation between topography and gravity anomalies, either in the space domain or in the spectral domain (where admittance or coherence functions are estimated). Correlation and spectral analyses performed for Venus have been limited by geometry (typically, only rectangular or circular data windows were used), and most have lacked robust error estimates. There are two levels of error: the first is how well the correlation, admittance, or coherence can be estimated; the second, and most important, is how well the lithospheric elastic thickness can be estimated from those. The first type of error is well understood, via classical analyses of resolution, bias, and variance in multivariate spectral analysis. Understanding this error leads to constructive approaches to performing the spectral analysis, via multi-taper methods (which reduce variance) with well-chosen optimal tapers (to reduce bias). The second type of error requires a complete analysis of the coupled system of differential equations that describes how certain inputs (the unobservable initial loading by topography at various interfaces) are mapped to the output (final, measurable topography and gravity anomalies). The equations of flexure have one unknown: the flexural rigidity or effective elastic thickness, the parameter of interest. Fortunately, we have recently come to a full understanding of this second type of error, and derived a maximum-likelihood estimation (MLE) method that results in unbiased and minimum-variance estimates of the flexural rigidity under a variety of initial

  10. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  11. Final Report for Dynamic Models for Causal Analysis of Panel Data. Quality of Maximum Likelihood Estimates of Parameters in a Log-Linear Rate Model. Part III, Chapter 3.

    ERIC Educational Resources Information Center

    Fennell, Mary L.; And Others

    This document is part of a series of chapters described in SO 011 759. This chapter reports the results of Monte Carlo simulations designed to analyze problems of using maximum likelihood estimation (MLE: see SO 011 767) in research models which combine longitudinal and dynamic behavior data in studies of change. Four complications--censoring of…

  12. Quantifying uncertainties in sulphur and nitrogen deposition to Wales, UK using the Hull Acid Rain Model (HARM) within the Generalised Likelihood Uncertainty Estimation (GLUE) framework

    NASA Astrophysics Data System (ADS)

    Page, T.; Whyatt, D.; Beven, K.; Metcalfe, S.; Nicholson, J.

    2003-04-01

    The Hull Acid Rain Model (HARM) is a receptor-orientated Lagrangian trajectory model used to represent the processes of emission, transformation and deposition of acidifying species. HARM employs a simplified representation of meteorological conditions. It has a coupled chemical scheme and includes a parameterisation of orographic enhancement, which is believed to make a significant contribution to wet deposition in upland UK. An uncertainty analysis of the model using the Generalised Likelihood Uncertainty Estimation (GLUE) methodology is presented, constrained by observations at 25 sites across Wales, UK. The quantified uncertainty is used to investigate the effect of predictive uncertainty on critical load exceedance. The GLUE analysis comprised 100,000 model realisations from Monte Carlo sampling of pre-specified parameter ranges, which were evaluated on the basis of how well the simulated fluxes matched those measured at all observation sites. Only 2101 simulations were deemed to produce adequate representations of the observed fluxes and were used to create weighted prediction bounds for each site. Overall, the uncertainty prediction bounds spanned the observed data satisfactorily for most sites, but there was a tendency for high-rainfall sites to be overestimated and for sites close to major source areas to be underestimated. For wet-deposited oxidised N there was a systematic overestimation at the majority of sites. The overestimation of wet-deposited oxidised N for Wales is in contrast to other regions of the UK, where it is underestimated. The predictive capability of HARM was tested with a ‘hindcast’ to 44 Welsh observation sites for 1984, using the 2101 acceptable parameterisations for 1995 with a 1984 emissions inventory. The spatial pattern of model predictions was consistent between 1984 and 1995, leading to the conclusion that either model structural change is required to improve HARM's ability to represent the spatial deposition pattern or
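
    For readers unfamiliar with GLUE, the toy sketch below shows the bare procedure referred to above: Monte Carlo sampling from pre-specified parameter ranges, an informal likelihood measure based on fit to observations, a behavioural threshold, and likelihood-weighted prediction bounds. The model, threshold, and likelihood measure are deliberately simplistic and bear no relation to the HARM configuration.

```python
# Minimal GLUE-style sketch: sample parameters, score each realisation against
# observations, keep only "behavioural" runs, and form likelihood-weighted 5-95%
# prediction bounds. The linear toy model and all thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(3)
x_obs = np.linspace(0, 1, 25)                                   # 25 "sites"
y_obs = 2.0 * x_obs + 0.5 + 0.1 * rng.standard_normal(25)       # synthetic observations

n_runs = 100_000
a = rng.uniform(0, 5, n_runs)                                   # Monte Carlo sampling of
b = rng.uniform(-1, 2, n_runs)                                  # pre-specified parameter ranges
sim = a[:, None] * x_obs + b[:, None]                           # all model realisations

sse = np.sum((sim - y_obs) ** 2, axis=1)
keep = sse < 2.0 * sse.min()                                    # behavioural threshold (illustrative)
sim_b, w = sim[keep], 1.0 / sse[keep]                           # informal likelihood weights
w /= w.sum()

# Likelihood-weighted 5-95% prediction bounds at each site.
bounds = []
for j in range(x_obs.size):
    order = np.argsort(sim_b[:, j])
    cw = np.cumsum(w[order])
    bounds.append((np.interp(0.05, cw, sim_b[order, j]),
                   np.interp(0.95, cw, sim_b[order, j])))
print("behavioural runs:", keep.sum(), " bounds at first site:", bounds[0])
```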

  13. Map-likelihood phasing

    PubMed Central

    Terwilliger, Thomas C.

    2001-01-01

    The recently developed technique of maximum-likelihood density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] allows a calculation of phase probabilities based on the likelihood of the electron-density map to be carried out separately from the calculation of any prior phase probabilities. Here, it is shown that phase-probability distributions calculated from the map-likelihood function alone can be highly accurate and that they show minimal bias towards the phases used to initiate the calculation. Map-likelihood phase probabilities depend upon expected characteristics of the electron-density map, such as a defined solvent region and expected electron-density distributions within the solvent region and the region occupied by a macromolecule. In the simplest case, map-likelihood phase-probability distributions are largely based on the flatness of the solvent region. Though map-likelihood phases can be calculated without prior phase information, they are greatly enhanced by high-quality starting phases. This leads to the technique of prime-and-switch phasing for removing model bias. In prime-and-switch phasing, biased phases such as those from a model are used to prime or initiate map-likelihood phasing, then final phases are obtained from map-likelihood phasing alone. Map-likelihood phasing can be applied in cases with solvent content as low as 30%. Potential applications of map-likelihood phasing include unbiased phase calculation from molecular-replacement models, iterative model building, unbiased electron-density maps for cases where 2Fo − Fc or σA-weighted maps would currently be used, structure validation and ab initio phase determination from solvent masks, non-crystallographic symmetry or other knowledge about expected electron density. PMID:11717488

  14. Likelihood methods for point processes with refractoriness.

    PubMed

    Citi, Luca; Ba, Demba; Brown, Emery N; Barbieri, Riccardo

    2014-02-01

    Likelihood-based encoding models founded on point processes have received significant attention in the literature because of their ability to reveal the information encoded by spiking neural populations. We propose an approximation to the likelihood of a point-process model of neurons that holds under assumptions about the continuous time process that are physiologically reasonable for neural spike trains: the presence of a refractory period, the predictability of the conditional intensity function, and its integrability. These are properties that apply to a large class of point processes arising in applications other than neuroscience. The proposed approach has several advantages over conventional ones. In particular, one can use standard fitting procedures for generalized linear models based on iteratively reweighted least squares while improving the accuracy of the approximation to the likelihood and reducing bias in the estimation of the parameters of the underlying continuous-time model. As a result, the proposed approach can use a larger bin size to achieve the same accuracy as conventional approaches would with a smaller bin size. This is particularly important when analyzing neural data with high mean and instantaneous firing rates. We demonstrate these claims on simulated and real neural spiking activity. By allowing a substantive increase in the required bin size, our algorithm has the potential to lower the barrier to the use of point-process methods in an increasing number of applications. PMID:24206384
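
    For context, the sketch below shows the conventional discrete-time (Poisson GLM) approximation to a point-process log-likelihood, log L ≈ Σ_k [y_k log(λ_k Δ) − λ_k Δ], fitted by Newton iterations of the kind used in iteratively reweighted least squares. It is the baseline that the proposed refractoriness-aware approximation improves upon, not a reproduction of that approximation; the simulated neuron and covariate are illustrative.

```python
# Minimal sketch of the conventional discretized point-process likelihood, fitted as
# a Poisson GLM by Newton/IRLS-style iterations. The refractoriness-aware likelihood
# approximation proposed in the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.001, 60_000                        # 1 ms bins, 60 s of data
stim = np.sin(2 * np.pi * 2 * np.arange(n) * dt)

# Simulate spikes from a conditional intensity with a crude 3 ms refractory period.
lam_true = 20 * np.exp(0.8 * stim)           # spikes/s
y = np.zeros(n)
last = -np.inf
for k in range(n):
    lam = 0.0 if (k - last) * dt < 0.003 else lam_true[k]
    if rng.random() < lam * dt:
        y[k], last = 1, k

# Newton fit of log(lambda_k) = b0 + b1*stim_k under the Poisson approximation.
X = np.column_stack([np.ones(n), stim])
beta = np.array([np.log(y.mean() / dt), 0.0])    # start at the mean firing rate
for _ in range(20):
    mu = np.exp(X @ beta) * dt                   # expected counts per bin
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

# Roughly log(20) and 0.8, biased slightly downward by the refractory period.
print("fitted (b0, b1):", beta)
```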

  15. Quasi-likelihood for Spatial Point Processes

    PubMed Central

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    2014-01-01

    Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation, which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970

  16. Simplifying Likelihood Ratios

    PubMed Central

    McGee, Steven

    2002-01-01

    Likelihood ratios are one of the best measures of diagnostic accuracy, although they are seldom used, because interpreting them requires a calculator to convert back and forth between “probability” and “odds” of disease. This article describes a simpler method of interpreting likelihood ratios, one that avoids calculators, nomograms, and conversions to “odds” of disease. Several examples illustrate how the clinician can use this method to refine diagnostic decisions at the bedside.

  17. The Phylogenetic Likelihood Library

    PubMed Central

    Flouri, T.; Izquierdo-Carrasco, F.; Darriba, D.; Aberer, A.J.; Nguyen, L.-T.; Minh, B.Q.; Von Haeseler, A.; Stamatakis, A.

    2015-01-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2–10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). PMID:25358969

  18. The phylogenetic likelihood library.

    PubMed

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). PMID:25358969

  19. Technical Note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximum likelihood programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...

  20. Likelihood Analysis for Mega Pixel Maps

    NASA Technical Reports Server (NTRS)

    Kogut, Alan J.

    1999-01-01

    The derivation of cosmological parameters from astrophysical data sets routinely involves operation counts which scale as O(N^3), where N is the number of data points. Currently planned missions, including MAP and Planck, will generate sky maps with N_d = 10^6 or more pixels. Simple "brute force" analysis, applied to such mega-pixel data, would require years of computing even on the fastest computers. We describe an algorithm which allows estimation of the likelihood function in the direct pixel basis. The algorithm uses a conjugate gradient approach to evaluate χ² and a geometric approximation to evaluate the determinant. Monte Carlo simulations provide a correction to the determinant, yielding an unbiased estimate of the likelihood surface in an arbitrary region surrounding the likelihood peak. The algorithm requires O(N_d^(3/2)) operations and O(N_d) storage for each likelihood evaluation, and allows for significant parallel computation.

  1. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    PubMed Central

    2010-01-01

    Background The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility of partitioning data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command-line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameter setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software package will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2

  2. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
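
    The quoted probabilities follow from the Poisson formula P(N ≥ 1) = 1 − e^(−λ), as in the short check below. The decadal rates used for the VEI ≥ 5 and VEI ≥ 6 cases are not stated in the abstract and are chosen here only to reproduce the quoted figures.

```python
# Quick check of the Poisson probabilities quoted above, assuming an expected count
# of about 7 VEI>=4 events per decade; the VEI>=5 and VEI>=6 rates below are
# illustrative values chosen to back out the quoted ~49% and ~18%.
from math import exp

def prob_at_least_one(rate):
    """P(N >= 1) for a Poisson count with the given expected value."""
    return 1.0 - exp(-rate)

for label, rate in [("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.2)]:
    print(f"{label}: P(one or more in 2000-2009) = {prob_at_least_one(rate):.2f}")
```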

  3. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criteria technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9 and that the wind direction errors are unacceptably large, compared to those obtained for the SASS for similar assumptions.

  4. Refining clinical diagnosis with likelihood ratios.

    PubMed

    Grimes, David A; Schulz, Kenneth F

    Likelihood ratios can refine clinical diagnosis on the basis of signs and symptoms; however, they are underused in patient care. A likelihood ratio is the percentage of ill people with a given test result divided by the percentage of well individuals with the same result. Ideally, abnormal test results should be much more typical in ill individuals than in those who are well (high likelihood ratio), and normal test results should be more frequent in well people than in sick people (low likelihood ratio). Likelihood ratios near unity have little effect on decision-making; by contrast, high or low ratios can greatly shift the clinician's estimate of the probability of disease. Likelihood ratios can be calculated not only for dichotomous (positive or negative) tests but also for tests with multiple levels of results, such as creatine kinase or ventilation-perfusion scans. When combined with an accurate clinical diagnosis, likelihood ratios from ancillary tests improve diagnostic accuracy in a synergistic manner. PMID:15850636
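
    The odds arithmetic that likelihood ratios shortcut is compact enough to state explicitly: post-test odds equal pre-test odds multiplied by the likelihood ratio. The snippet below performs that conversion for an invented example (a pre-test probability of 20% and a positive likelihood ratio of 10).

```python
# The odds-based calculation that likelihood ratios summarize: convert the pre-test
# probability to odds, multiply by the likelihood ratio, and convert back.
def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Made-up example: a test with LR+ = 10 applied at a 20% pre-test probability.
print(round(post_test_probability(0.20, 10), 2))   # -> 0.71
```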

  5. Activity estimation in radioimmunotherapy using magnetic nanoparticles

    PubMed Central

    Rajabi, Hossein; Johari Daha, Fariba

    2015-01-01

    Objective Estimation of the activity accumulated in tumors and organs is very important in predicting the response to radiopharmaceutical treatment. In this study, we synthesized 177Lutetium (177Lu)-trastuzumab-iron oxide nanoparticles as a double radiopharmaceutical agent for treatment and for improved estimation of organ activity by magnetic resonance imaging (MRI). Methods 177Lu-trastuzumab-iron oxide nanoparticles were synthesized, and quality control tests such as labeling yield, nanoparticle size determination, stability in buffer and blood serum up to 4 d, immunoreactivity, and biodistribution in normal mice were performed. In mice bearing breast tumors, liver and tumor activities were calculated with three methods, single photon emission computed tomography (SPECT), MRI, and organ extraction, which were compared with each other. Results The good results of the quality control tests (labeling yield: 61%±2%, mean nanoparticle hydrodynamic size: 41±15 nm, stability in buffer: 86%±5%, stability in blood serum: 80%±3%, immunoreactivity: 80%±2%) indicated that 177Lu-trastuzumab-iron oxide nanoparticles could be used as a double radiopharmaceutical agent in tumor-bearing mice. The results showed that 177Lu-trastuzumab-iron oxide nanoparticles with MRI were able to measure organ activities more accurately than SPECT. Conclusions Co-conjugating radiopharmaceuticals with MRI contrast agents such as iron oxide nanoparticles may be a good way to achieve better dosimetry in nuclear medicine treatment. PMID:25937783

  6. How much to trust the senses: likelihood learning.

    PubMed

    Sato, Yoshiyuki; Kording, Konrad P

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about the likelihood. PMID:25398975

  7. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    PubMed

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation of and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and to compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in the concluding remarks. PMID:24500821

  8. Event-Related fMRI Studies of Episodic Encoding and Retrieval: Meta-Analyses Using Activation Likelihood Estimation

    ERIC Educational Resources Information Center

    Spaniol, Julia; Davidson, Patrick S. R.; Kim, Alice S. N.; Han, Hua; Moscovitch, Morris; Grady, Cheryl L.

    2009-01-01

    The recent surge in event-related fMRI studies of episodic memory has generated a wealth of information about the neural correlates of encoding and retrieval processes. However, interpretation of individual studies is hampered by methodological differences, and by the fact that sample sizes are typically small. We submitted results from studies of…

  9. Likelihood and clinical trials.

    PubMed

    Hill, G; Forbes, W; Kozak, J; MacNeill, I

    2000-03-01

    The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory. PMID:10760630

  10. Likelihoods for fixed rank nomination networks.

    PubMed

    Hoff, Peter; Fosdick, Bailey; Volfovsky, Alex; Stovel, Katherine

    2013-12-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  11. Decadal Variation of the Number of El Nino Onsets and El Nino-Related Months and Estimating the Likelihood of El Nino Onset in a Warming World

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2009-01-01

    Examination of the decadal variation of the number of El Nino onsets and El Nino-related months for the interval 1950-2008 clearly shows that the variation is better explained as one expressing normal fluctuation and not one related to global warming. Comparison of the recurrence periods for El Nino onsets against event durations for moderate/strong El Nino events results in a statistically important relationship that allows for the possible prediction of the onset for the next anticipated El Nino event. Because the last known El Nino was a moderate event of short duration (6 months), having onset in August 2006, unless it is a statistical outlier, one expects the next onset of El Nino probably in the latter half of 2009, with peak following in November 2009-January 2010. If true, then initial early extended forecasts of frequencies of tropical cyclones for the 2009 North Atlantic basin hurricane season probably should be revised slightly downward from near average-to-above average numbers to near average-to-below average numbers of tropical cyclones in 2009, especially as compared to averages since 1995, the beginning of the current high-activity interval for tropical cyclone activity.

  12. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed-point iterations. The field structure of the data is also taken into account in developing the maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
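
    The fixed-point iterations referred to above are, in the simplest setting, the familiar EM updates for a mixture of normal densities. The sketch below shows that simplest case with independent features; the first-order dependent feature trees and the field structure of the data discussed in the abstract are not implemented here.

```python
# Minimal sketch of EM-style fixed-point iterations for decomposing a mixture of
# normal densities, with independent (diagonal-covariance) features for brevity.
import numpy as np

rng = np.random.default_rng(5)
X = np.concatenate([rng.normal(0.0, 1.0, (300, 2)), rng.normal(3.0, 0.7, (200, 2))])

K = 2
n, d = X.shape
pi = np.full(K, 1.0 / K)
mu = X[rng.choice(n, K, replace=False)]
var = np.ones((K, d))

for _ in range(100):
    # E-step: responsibilities under diagonal-covariance normal components.
    logp = np.stack([np.log(pi[k])
                     - 0.5 * np.sum(np.log(2 * np.pi * var[k])
                                    + (X - mu[k]) ** 2 / var[k], axis=1)
                     for k in range(K)], axis=1)
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: fixed-point updates of mixing proportions, means, and variances.
    Nk = resp.sum(axis=0)
    pi = Nk / n
    mu = (resp.T @ X) / Nk[:, None]
    var = np.stack([(resp[:, k, None] * (X - mu[k]) ** 2).sum(axis=0) / Nk[k]
                    for k in range(K)])

print("estimated component means:\n", mu)
```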

  13. A Monte Carlo comparison of the recovery of winds near upwind and downwind from the SASS-1 model function by means of the sum of squares algorithm and a maximum likelihood estimator

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.

    1984-01-01

    Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for sufficiently light winds at all incidence angles and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.

  14. Accuracy of highly sexually active gay and bisexual men's predictions of their daily likelihood of anal sex and its relevance for intermittent event-driven HIV Pre-Exposure Prophylaxis

    PubMed Central

    Parsons, Jeffrey T.; Rendina, H. Jonathon; Grov, Christian; Ventuneac, Ana; Mustanski, Brian

    2014-01-01

    Objective We sought to examine highly sexually active gay and bisexual men's accuracy in predicting their sexual behavior for the purposes of informing future research on intermittent, event-driven HIV Pre-Exposure Prophylaxis (PrEP). Design For 30 days, 92 HIV-negative men completed a daily survey about their sexual behavior (n = 1,688 days of data) and indicated their likelihood of having anal sex with a casual male partner the following day. Method We utilized multilevel modeling to analyze the association between self-reported likelihood of and subsequent engagement in anal sex. Results We found a linear association between men's reported likelihood of anal sex with casual partners and the actual probability of engaging in sex, though men overestimated the likelihood of sex. Overall, we found that men were better at predicting when they would not have sex than when they would, particularly if any likelihood value greater than 0% was treated as indicative that sex might occur. We found no evidence that men's accuracy of prediction was affected by whether it was a weekend or whether they were using substances, though both did increase the probability of sex. Discussion These results suggested that, were men taking event-driven intermittent PrEP, 14% of doses could have been safely skipped with a minimal rate of false negatives using guidelines of taking a dose unless there was no chance (i.e., 0% likelihood) of sex on the following day. This would result in a savings of over $1,300 per year in medication costs per participant. PMID:25559594

  15. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporarily appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction. PMID:26208310

  16. NON-REGULAR MAXIMUM LIKELIHOOD ESTIMATION

    EPA Science Inventory

    Even though a body of data on the environmental occurrence of medicinal, government-approved ("ethical") pharmaceuticals has been growing over the last two decades (the subject of this book), nearly nothing is known about the disposition of illicit (illegal) drugs in th...

  17. Estimating ROI activity concentration with photon-processing and photon-counting SPECT imaging systems

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Recently, a new class of imaging systems, referred to as photon-processing (PP) systems, is being developed that uses real-time maximum-likelihood (ML) methods to estimate multiple attributes per detected photon and stores these attributes in list format. PP systems could have a number of potential advantages compared to systems that bin photons based on attributes such as energy, projection angle, and position, referred to as photon-counting (PC) systems. For example, PP systems do not suffer from binning-related information loss and provide the potential to extract information from attributes such as the energy deposited by the detected photon. To quantify the effects of this advantage on task performance, objective evaluation studies are required. We performed this study in the context of quantitative 2-dimensional single-photon emission computed tomography (SPECT) imaging with the end task of estimating the mean activity concentration within a region of interest (ROI). We first theoretically outline the effect of the null space on estimating the mean activity concentration, and argue that due to this effect, PP systems could have better estimation performance compared to PC systems with noise-free data. To evaluate the performance of PP and PC systems with noisy data, we developed a singular value decomposition (SVD)-based analytic method to estimate the activity concentration from PP systems. Using simulations, we studied the accuracy and precision of this technique in estimating the activity concentration. We used this framework to objectively compare PP and PC systems on the activity concentration estimation task. We investigated the effects of varying the size of the ROI and varying the number of bins for the attribute corresponding to the angular orientation of the detector in a continuously rotating SPECT system. The results indicate that in several cases, PP systems offer improved estimation performance compared to PC systems.

  18. Stepwise Signal Extraction via Marginal Likelihood

    PubMed Central

    Du, Chao; Kao, Chu-Lan Michael

    2015-01-01

    This paper studies the estimation of stepwise signals. To determine the number and locations of change-points of the stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with a quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
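
    The quadratic-cost dynamic program mentioned above has a compact generic form, sketched below for a stepwise signal with a penalised Gaussian segment cost. The marginal likelihood segment score and empirical Bayes prior of the paper are not reproduced; only the O(n²) recursion over candidate change-point locations is illustrated, with made-up data and penalty.

```python
# Minimal sketch of the quadratic-cost dynamic program for stepwise-signal
# segmentation, using a penalised residual-sum-of-squares segment score instead of
# the paper's marginal likelihood. Data and penalty are illustrative.
import numpy as np

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0.0, 0.3, 40),
                    rng.normal(2.0, 0.3, 30),
                    rng.normal(0.8, 0.3, 50)])
n = y.size
penalty = 2.0 * np.log(n)                    # per-change-point penalty (illustrative)

# Cumulative sums give each segment's residual sum of squares in O(1).
c1 = np.concatenate([[0.0], np.cumsum(y)])
c2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
def seg_cost(i, j):                          # cost of fitting one level to y[i:j]
    s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
    return s2 - s * s / m

best = np.zeros(n + 1)                       # best[j] = optimal score for y[:j]
prev = np.zeros(n + 1, dtype=int)
for j in range(1, n + 1):
    cands = [best[i] + seg_cost(i, j) + penalty for i in range(j)]
    prev[j] = int(np.argmin(cands))
    best[j] = cands[prev[j]]

# Backtrack the change-point locations.
cps, j = [], n
while j > 0:
    cps.append(prev[j])
    j = prev[j]
print("estimated change-points:", sorted(cps[:-1]))   # drop the leading 0
```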

  19. Sedentary behavior, physical activity, and likelihood of breast cancer among black and white women: a report from the Southern Community Cohort Study

    PubMed Central

    Cohen, Sarah S.; Matthews, Charles E.; Bradshaw, Patrick T.; Lipworth, Loren; Buchowski, Maciej S.; Signorello, Lisa B.; Blot, William J.

    2013-01-01

    Increased physical activity has been shown to be protective against breast cancer, although few studies have examined this association in black women. In addition, limited evidence to date indicates that sedentary behavior may be an independent risk factor for breast cancer. We examined sedentary behavior and physical activity in relation to subsequent incident breast cancer in a nested case-control study of 546 cases (374 among black women) and 2,184 matched controls enrolled in the Southern Community Cohort Study. Sedentary and physically active behaviors were assessed via self-report at study baseline (2002–2009) using a validated physical activity questionnaire. Conditional logistic regression was used to estimate mutually adjusted odds ratios (OR) and corresponding 95% confidence intervals (CI) for quartiles of sedentary and physical activity measures in relation to breast cancer risk. Being in the highest versus lowest quartile of total sedentary behavior (≥12 hours/day versus <5.5 hours/day) was associated with increased odds of breast cancer among white women (OR=1.94 [95% CI 1.01–3.70], p for trend=0.1) but not black women (OR=1.23 [95% CI 0.82–1.83], p for trend=0.6) after adjustment for physical activity. After adjustment for sedentary activity, greater physical activity was associated with reduced odds of breast cancer among white women (p for trend=0.03) only. In conclusion, sedentary behavior and physical activity are each independently associated with breast cancer risk among white women. Differences in these associations between black and white women require further investigation. Reducing sedentary behavior and increasing physical activity are potentially independent targets for breast cancer prevention interventions. PMID:23576427

  20. Revised activation estimates for silicon carbide

    SciTech Connect

    Heinisch, H.L.; Cheng, E.T.; Mann, F.M.

    1996-10-01

    Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of ²⁶Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low-activation material, even in first wall applications.

  1. Maximum-likelihood density modification

    PubMed Central

    Terwilliger, Thomas C.

    2000-01-01

    A likelihood-based approach to density modification is developed that can be applied to a wide variety of cases where some information about the electron density at various points in the unit cell is available. The key to the approach consists of developing likelihood functions that represent the probability that a particular value of electron density is consistent with prior expectations for the electron density at that point in the unit cell. These likelihood functions are then combined with likelihood functions based on experimental observations and with others containing any prior knowledge about structure factors to form a combined likelihood function for each structure factor. A simple and general approach to maximizing the combined likelihood function is developed. It is found that this likelihood-based approach yields greater phase improvement in model and real test cases than either conventional solvent flattening and histogram matching or a recent reciprocal-space solvent-flattening procedure [Terwilliger (1999), Acta Cryst. D55, 1863–1871]. PMID:10944333

  2. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  3. Maximum likelihood topographic map formation.

    PubMed

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic Gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  4. Estimates of activation in arterial smooth muscle.

    PubMed

    Singer, H A; Kamm, K E; Murphy, R A

    1986-09-01

    We have previously described the onset of a "latch" state in the swine carotid media after K+ depolarization. This state was characterized by maintained stress after a decrease in shortening velocities and in the level of cross-bridge phosphorylation. The present experiments were designed to determine whether there were changes in other mechanical properties in swine carotid media associated with the onset of the latch state. Medial strips (less than 500 μm thick), incubated in physiological salt solution (PSS) at 37 degrees C at their optimal length (Lo), were subjected to ramp stretches (5.86 mm/s) of 5% Lo. The active stress (Sa) response to stretch was computed by subtraction of the passive element contribution (as determined from identical stretches after 30 min incubation in Ca2+-free PSS) from the total response in the activated muscle. Transitions in the total and active stress responses to stretch were observed in strips stimulated with 109 mM K+ for 1 min or longer and were interpreted as yielding of the contractile apparatus. Active dynamic stiffness (dS/dLo), calculated from the initial 1% Lo portion of the stretch response, correlated linearly with active stress over a wide range. Maximal stress and dynamic stiffness were reached by 1 min and were maintained for at least 30 min in K+-depolarized preparations. However, yield stress increased significantly between 1 and 10 min, and there was a large increase in the length at which yield was observed (1.09 +/- 0.06 to 1.86 +/- 0.10% Lo; n = 9). These increases were maintained between 10 and 30 min. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:3752237

  5. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
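
    The core ABC idea behind the sampler can be stated in a few lines: draw parameters from the prior, simulate mock data, and accept draws whose summary statistic falls within a tolerance of the observed one. The rejection sketch below uses a toy Poisson count model; cosmoabc itself implements a Population Monte Carlo refinement with adaptive importance sampling, and none of the names or numbers below come from the package.

```python
# Minimal ABC rejection sketch: accept prior draws whose simulated summary statistic
# lies within a tolerance of the observed one. Toy model and tolerance are illustrative.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.poisson(lam=25, size=100)              # "observed" counts in a toy model
obs_mean = obs.mean()

def simulate(lam):
    return rng.poisson(lam=lam, size=100)

n_draws, tol = 50_000, 0.5
prior_draws = rng.uniform(5, 50, n_draws)        # flat prior on the rate parameter
accepted = [lam for lam in prior_draws
            if abs(simulate(lam).mean() - obs_mean) < tol]

post = np.array(accepted)
print(f"accepted {post.size} draws; approximate posterior mean {post.mean():.1f} (true 25)")
```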

  6. Estimating phytoplankton photosynthesis by active fluorescence

    SciTech Connect

    Falkowski, P.G.; Kolber, Z.

    1992-01-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption cross section (σ), and a quantum yield (φ). As photosynthesis becomes increasingly light saturated, an increased fraction of reaction centers close. At light saturation the maximum photosynthetic rate is given as the product of the number of reaction centers (n) and their maximum electron transport rate (1/τ). Using active fluorometry it is possible to measure non-destructively and in real time the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as σ and φ. τ can be readily calculated from knowledge of the light saturation parameter I_k (which can be deduced in situ from active fluorescence measurements) and σ. We built a pump-and-probe fluorometer, which is interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f_0) and succeeding (f_0) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15 and corresponds to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis and sections showing the variability in its composite parameters reveal a significant effect of nutrient availability on biomass-specific rates of photosynthesis in the ocean.

  7. Estimating phytoplankton photosynthesis by active fluorescence

    SciTech Connect

    Falkowski, P.G.; Kolber, Z.

    1992-10-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption capture cross section {sigma}, and a quantum yield {phi}. As photosynthesis becomes increasingly light saturated, an increased fraction of reaction centers close. At light saturation the maximum photosynthetic rate is given as the product of the number of reaction centers (n) and their maximum electron transport rate (1/{tau}). Using active fluorometry it is possible to measure non-destructively and in real time the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as {sigma} and {phi}. {tau} can be readily calculated from knowledge of the light saturation parameter, I{sub k} (which can be deduced in situ by active fluorescence measurements), and {sigma}. We built a pump and probe fluorometer, which is interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f{sub 0}) and succeeding (f{sub 0}) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15 and corresponds to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis and sections showing the variability in its composite parameters reveal a significant effect of nutrient availability on biomass specific rates of photosynthesis in the ocean.
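
    For readers who want the shape of the model rather than the instrumentation, the toy Python sketch below evaluates a saturating photosynthesis-irradiance curve in which the rate is linear in irradiance at low light and approaches n/{tau} at saturation. The functional form, parameter values, and units are illustrative assumptions, not the authors' calibration.

        import numpy as np

        def photosynthesis_rate(irradiance, n_centers, tau, i_k):
            # Illustrative saturating P-I curve: linear at low light, approaching n/tau at saturation.
            p_max = n_centers / tau                    # maximum electron transport capacity
            return p_max * (1.0 - np.exp(-irradiance / i_k))

        irradiance = np.linspace(0, 2000, 9)           # photon flux density (arbitrary units)
        rates = photosynthesis_rate(irradiance, n_centers=1.0, tau=0.004, i_k=300.0)
        for i, p in zip(irradiance, rates):
            print(f"I = {i:7.1f}   P = {p:8.2f}")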

  8. Modified maximum likelihood registration based on information fusion

    NASA Astrophysics Data System (ADS)

    Qi, Yongqing; Jing, Zhongliang; Hu, Shiqiang

    2007-11-01

    The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.

  9. Be the Volume: A Classroom Activity to Visualize Volume Estimation

    ERIC Educational Resources Information Center

    Mikhaylov, Jessica

    2011-01-01

    A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…

  10. A Survey of the Likelihood Approach to Bioequivalence Trials

    PubMed Central

    Choi, Leena; Caffo, Brian; Rohde, Charles

    2009-01-01

    SUMMARY Bioequivalence trials are abbreviated clinical trials whereby a generic drug or new formulation is evaluated to determine if it is “equivalent” to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area - which we believe are indicative of the existence of the systemic defects in the frequentist approach - that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence. We discuss how the likelihood approach is useful to present the evidence for both average and population bioequivalence within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. PMID:18618422
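
    To make the idea of a profile likelihood concrete, the sketch below profiles out the nuisance parameters (reference mean and common standard deviation) of a normal model for a hypothetical mean difference between a test and a reference formulation, and reports the relative likelihood over a grid of differences. The data, model, and grid are invented for the illustration and are not the trial designs discussed in the survey.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        test = rng.normal(0.05, 0.25, size=24)        # hypothetical log-scale responses, test formulation
        ref = rng.normal(0.00, 0.25, size=24)         # hypothetical responses, reference formulation

        def neg_log_lik(params, delta):
            # Normal log-likelihood with the test-reference mean difference fixed at delta.
            mu_ref, log_sigma = params
            sigma = np.exp(log_sigma)
            ll = stats.norm.logpdf(ref, mu_ref, sigma).sum()
            ll += stats.norm.logpdf(test, mu_ref + delta, sigma).sum()
            return -ll

        def profile_log_lik(delta):
            # Maximise over the nuisance parameters (mu_ref, sigma) at each fixed delta.
            res = optimize.minimize(neg_log_lik, x0=[0.0, np.log(0.2)],
                                    args=(delta,), method="Nelder-Mead")
            return -res.fun

        deltas = np.linspace(-0.2, 0.3, 11)
        profile = np.array([profile_log_lik(d) for d in deltas])
        rel = np.exp(profile - profile.max())         # relative profile likelihood, max-normalised
        for d, r in zip(deltas, rel):
            print(f"delta = {d:+.2f}   relative likelihood = {r:.3f}")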

  11. On the likelihood of forests

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  12. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    NASA Astrophysics Data System (ADS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.

  13. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where ``medical history`` is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  14. Model-free linkage analysis using likelihoods

    SciTech Connect

    Curtis, D.; Sham, P.C.

    1995-09-01

    Misspecification of transmission model parameters can produce artifactual lod scores at small recombination fractions and in multipoint analysis. To avoid this problem, we have tried to devise a test that aims to detect a genetic effect at a particular locus, rather than attempting to estimate the map position of a locus with specified effect. Maximizing likelihoods over transmission model parameters, as well as linkage parameters, can produce seriously biased parameter estimates and so yield tests that lack power for the detection of linkage. However, constraining the transmission model parameters to produce the correct population prevalence largely avoids this problem. For computational convenience, we recommend that the likelihoods under linkage and nonlinkage are independently maximized over a limited set of transmission models, ranging from Mendelian dominant to null effect and from null effect to Mendelian recessive. In order to test for a genetic effect at a given map position, the likelihood under linkage is maximized over admixture, the proportion of families linked. Application to simulated data for a wide range of transmission models in both affected sib pairs and pedigrees demonstrates that the new method is well behaved under the null hypothesis and provides a powerful test for linkage when it is present. This test requires no specification of transmission model parameters, apart from an approximate estimate of the population prevalence. It can be applied equally to sib pairs and pedigrees, and, since it does not diminish the lod score at test positions very close to a marker, it is suitable for application to multipoint data. 24 refs., 1 fig., 4 tabs.

  15. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  16. LRG DR7 Likelihood Software

    NASA Astrophysics Data System (ADS)

    Reid, Beth A.

    2013-06-01

    This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.

  17. The Use of Kernel Density Estimation to Examine Associations between Neighborhood Destination Intensity and Walking and Physical Activity

    PubMed Central

    King, Tania L.; Thornton, Lukar E.; Bentley, Rebecca J.; Kavanagh, Anne M.

    2015-01-01

    Background Local destinations have previously been shown to be associated with higher levels of both physical activity and walking, but little is known about how the distribution of destinations is related to activity. Kernel density estimation is a spatial analysis technique that accounts for the location of features relative to each other. Using kernel density estimation, this study sought to investigate whether individuals who live near destinations (shops and service facilities) that are more intensely distributed rather than dispersed: 1) have higher odds of being sufficiently active; 2) engage in more frequent walking for transport and recreation. Methods The sample consisted of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Destinations within these areas were geocoded and kernel density estimates of destination intensity were created using kernels of 400 m (meters), 800 m and 1200 m. Using multilevel logistic regression, the association between destination intensity (classified in quintiles from Q1, least, to Q5, most) and likelihood of: 1) being sufficiently active (compared to insufficiently active); 2) walking ≥4/week (at least 4 times per week, compared to walking less), was estimated in models that were adjusted for potential confounders. Results For all kernel distances, there was a significantly greater likelihood of walking ≥4/week among respondents living in areas of greatest destination intensity compared to areas with least destination intensity: 400 m (Q4 OR 1.41 95%CI 1.02–1.96; Q5 OR 1.49 95%CI 1.06–2.09), 800 m (Q4 OR 1.55, 95%CI 1.09–2.21; Q5, OR 1.71, 95%CI 1.18–2.48) and 1200 m (Q4, OR 1.7, 95%CI 1.18–2.45; Q5, OR 1.86 95%CI 1.28–2.71). There was also evidence of associations between destination intensity and sufficient physical activity; however, these associations were markedly attenuated when walking was included in the models. Conclusions This study, conducted within urban Melbourne, found that those who lived
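
    A minimal version of the kernel density idea used in the study is sketched below, assuming synthetic destination and residence coordinates and a hand-rolled Gaussian kernel so that the bandwidth can be set directly in metres (the 400 m, 800 m and 1200 m kernels mentioned above); the coordinates and scaling are placeholders, not the Melbourne data.

        import numpy as np

        rng = np.random.default_rng(2)
        destinations = rng.uniform(0, 5000, size=(300, 2))   # synthetic shop/service coordinates (m)
        homes = rng.uniform(0, 5000, size=(5, 2))            # synthetic residential locations (m)

        def destination_intensity(points, sources, bandwidth):
            # Gaussian kernel density of 'sources' evaluated at 'points'; bandwidth in metres.
            diffs = points[:, None, :] - sources[None, :, :]
            sq_dist = (diffs ** 2).sum(axis=2)
            kernel = np.exp(-0.5 * sq_dist / bandwidth ** 2)
            return kernel.sum(axis=1) / (2 * np.pi * bandwidth ** 2 * len(sources))

        for bw in (400.0, 800.0, 1200.0):                     # kernel distances used in the study
            intensity = destination_intensity(homes, destinations, bw)
            print(f"bandwidth {bw:6.0f} m ->", np.round(intensity * 1e6, 3))

    scipy.stats.gaussian_kde or scikit-learn's KernelDensity could be substituted for the hand-rolled kernel; the explicit version is used here only so that the bandwidth can be stated directly in metres.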

  18. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  19. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2004-01-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  20. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2003-12-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  1. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS

    SciTech Connect

    Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.; Næss, S. K.; Seljebotn, D. S.; Górski, K. M.; Huey, G.; Jewell, J. B.; Rocha, G.; Wehus, I. K.

    2013-11-10

    We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl{sub C}, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
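
    The identity underlying this construction is the standard factorization of a sequence with first-order (banded) dependence, in which the joint distribution is written entirely in terms of uni- and bivariate marginals; written generically for ordered variables x_1, ..., x_n it reads

        p(x_1, \dots, x_n) = \frac{\prod_{i=1}^{n-1} p(x_i, x_{i+1})}{\prod_{i=2}^{n-1} p(x_i)}

    which is the generic Markov-chain result rather than a quotation from the paper.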

  2. Targeted maximum likelihood based causal inference: Part I.

    PubMed

    van der Laan, Mark J

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is

  3. Targeted Maximum Likelihood Based Causal Inference: Part I

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is
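
    As a rough illustration of the targeting step described in these abstracts, the Python sketch below carries out a single least-squares fluctuation of an initial outcome regression for the simplest case: one binary treatment at a single time point and the average treatment effect. The simulated data, parametric initial estimators, and linear fluctuation are simplifications for exposition, not the super-learning-based procedure of the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(3)
        n = 2000
        W = rng.normal(size=(n, 2))                              # baseline covariates (simulated)
        g_true = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1])))
        A = rng.binomial(1, g_true)                              # binary treatment
        Y = 1.0 * A + W[:, 0] + 0.5 * W[:, 1] + rng.normal(size=n)   # outcome; true effect = 1.0

        # Initial estimates of the outcome regression Q(A, W) and treatment mechanism g(W).
        Q_model = LinearRegression().fit(np.column_stack([A, W]), Y)
        g_model = LogisticRegression().fit(W, A)
        g1 = g_model.predict_proba(W)[:, 1]
        Q_A = Q_model.predict(np.column_stack([A, W]))
        Q_1 = Q_model.predict(np.column_stack([np.ones(n), W]))
        Q_0 = Q_model.predict(np.column_stack([np.zeros(n), W]))

        # Targeting step: one least-squares fluctuation along the "clever covariate".
        H = A / g1 - (1 - A) / (1 - g1)
        eps = np.sum(H * (Y - Q_A)) / np.sum(H ** 2)
        Q_1_star = Q_1 + eps / g1
        Q_0_star = Q_0 - eps / (1 - g1)

        ate = np.mean(Q_1_star - Q_0_star)
        print(f"targeted estimate of the average treatment effect: {ate:.3f}")

    In the full methodology the initial factors come from loss-based super-learning, the fluctuation is the target-parameter-specific optimal (least favorable) submodel, and the updating step is iterated until convergence.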

  4. Human ECG signal parameters estimation during controlled physical activity

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on the Pan-Tompkins algorithm to estimate their parameters and to test the method.
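
    The Pan-Tompkins processing chain referred to above follows a standard sequence of steps (band-pass filtering, differentiation, squaring, moving-window integration, thresholding). The Python sketch below runs that generic sequence on a crude synthetic signal; the filter order, cut-offs, window length, and threshold are common textbook choices, not the parameters used in the study.

        import numpy as np
        from scipy import signal

        fs = 250.0                                   # sampling rate in Hz (assumed)
        rng = np.random.default_rng(4)
        t = np.arange(0, 10, 1 / fs)
        ecg = np.zeros_like(t)
        ecg[::int(fs)] = 1.0                         # crude synthetic beats, one per second
        ecg = signal.lfilter([1, -0.5], [1], ecg) + 0.05 * rng.normal(size=len(t))

        # 1) Band-pass filter (roughly 5-15 Hz) to emphasise the QRS band.
        b, a = signal.butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        filtered = signal.filtfilt(b, a, ecg)

        # 2) Differentiate, 3) square, 4) integrate over a ~150 ms moving window.
        diff = np.diff(filtered, prepend=filtered[0])
        squared = diff ** 2
        window = int(0.15 * fs)
        integrated = np.convolve(squared, np.ones(window) / window, mode="same")

        # 5) Threshold-based peak picking with a 200 ms refractory period.
        peaks, _ = signal.find_peaks(integrated,
                                     height=0.5 * integrated.max(),
                                     distance=int(0.2 * fs))
        print(f"detected {len(peaks)} candidate QRS complexes in {t[-1]:.1f} s of signal")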

  5. A hybrid likelihood algorithm for risk modelling.

    PubMed

    Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D

    1995-03-01

    The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
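
    The L1/L0 split described above is the familiar structure of a Poisson-process log-likelihood: the event term sums log-intensities at the observed event times, while the event-free term integrates the intensity over each subject's observation period. The toy sketch below writes this out for a constant-hazard cohort on simulated follow-up data; it illustrates the partition only and is not the EPI-CURE reformulation.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 5000
        true_rate = 0.02                                   # events per person-year (toy value)
        follow_up = rng.uniform(1, 20, size=n)             # years of observation per subject
        event_time = rng.exponential(1 / true_rate, size=n)
        observed_event = event_time < follow_up            # did a tumour occur during follow-up?
        time_at_risk = np.where(observed_event, event_time, follow_up)

        def log_likelihood(rate):
            # L1: contribution of the events; L0: contribution of event-free person-time.
            L1 = observed_event.sum() * np.log(rate)
            L0 = -rate * time_at_risk.sum()
            return L1 + L0

        rates = np.linspace(0.005, 0.05, 200)
        ll = np.array([log_likelihood(r) for r in rates])
        print(f"maximum-likelihood rate: {rates[ll.argmax()]:.4f} (true value {true_rate})")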

  6. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  7. EIA Completes Corrections to Drilling Activity Estimates Series

    EIA Publications

    1999-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  8. Active learning applied for photometric redshift estimation of quasars

    NASA Astrophysics Data System (ADS)

    Han, Bo; Zhang, Yanxia; Zhao, Yongheng

    2015-08-01

    For a long time, quasars’ photometric redshifts have been estimated by learning from all of the available training data. In the scenario of big data, the amount of available data is huge and the dataset may include noise. Consequently, a major research challenge is to design a learning process that draws the most informative data from the available dataset for optimal learning of the underlying relationships. By filtering out noisy and redundant data, such optimal learning can improve both estimation accuracy and speed. Towards this objective, in this study we develop an active learning approach that automatically learns a series of support vector regression models based on small chunks of sampled data. These models are applied to a validation dataset. Through active learning, the validation data whose estimation results vary within a certain range are regarded as the informative data and are aggregated into multiple training datasets. Next, the aggregated training datasets are combined into an ensemble estimator through averaging and then applied to a test dataset. Our experimental results on SDSS data show that the proposed method helps improve the accuracy of quasars’ photometric redshift estimation.
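
    A schematic rendering of the procedure described above is sketched below, using scikit-learn support vector regression models trained on random chunks, prediction spread on a validation set as the informativeness criterion, and a simple averaged ensemble; the synthetic photometry, chunk sizes, and the 30% selection threshold are illustrative assumptions rather than the settings used with the SDSS data.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(6)
        n = 3000
        colors = rng.normal(size=(n, 4))                       # synthetic photometric colours
        redshift = 1.5 + colors @ np.array([0.3, -0.2, 0.1, 0.05]) + 0.05 * rng.normal(size=n)

        train, valid, test = np.split(rng.permutation(n), [1500, 2500])

        # 1) Train several SVR models on small random chunks of the training pool.
        chunks = np.array_split(train, 5)
        models = [SVR(C=10.0, gamma="scale").fit(colors[c], redshift[c]) for c in chunks]

        # 2) Use disagreement on the validation set to flag informative objects.
        valid_preds = np.stack([m.predict(colors[valid]) for m in models])
        spread = valid_preds.max(axis=0) - valid_preds.min(axis=0)
        informative = valid[spread > np.quantile(spread, 0.7)]    # most disputed 30%

        # 3) Retrain on chunks augmented with the informative objects, then average.
        final_models = [SVR(C=10.0, gamma="scale").fit(
            colors[np.concatenate([c, informative])],
            redshift[np.concatenate([c, informative])]) for c in chunks]
        ensemble = np.mean([m.predict(colors[test]) for m in final_models], axis=0)
        rmse = np.sqrt(np.mean((ensemble - redshift[test]) ** 2))
        print(f"ensemble photo-z RMSE on the test set: {rmse:.3f}")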

  9. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then used with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than the calculation that ignores parking activity. PMID:25818089
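
    The weighting scheme can be illustrated with a toy calculation that averages hourly generation increments over a distribution of parking events. In the sketch below the rate function, diurnal temperature cycle, and parking-start distribution are placeholders; a real application would substitute the Wade-Reddy equation and the observed parking activity data.

        import numpy as np

        hours = np.arange(24)
        hourly_temp = 18 + 8 * np.sin((hours - 6) / 24 * 2 * np.pi)   # placeholder diurnal cycle (deg C)

        def hourly_generation_rate(temp_start, temp_end):
            # Placeholder for the temperature-driven vapour generation increment (g/h);
            # a real application would use the Wade-Reddy equation here.
            return max(temp_end - temp_start, 0.0) * 0.9

        # Placeholder parking events: (start hour, duration in hours, share of the fleet).
        parking_events = [(8, 9, 0.4), (12, 2, 0.3), (18, 12, 0.3)]

        weighted_total = 0.0
        for start, duration, share in parking_events:
            rates = [hourly_generation_rate(hourly_temp[(start + k) % 24],
                                            hourly_temp[(start + k + 1) % 24])
                     for k in range(duration)]
            weighted_total += share * sum(rates)

        print(f"weighted average diurnal vapour generation: {weighted_total:.2f} g/vehicle/day")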

  10. Total myrosinase activity estimates in brassica vegetable produce.

    PubMed

    Dosz, Edward B; Ku, Kang-Mo; Juvik, John A; Jeffery, Elizabeth H

    2014-08-13

    Isothiocyanates, generated from the hydrolysis of glucosinolates in plants of the Brassicaceae family, promote health, including anticancer bioactivity. Hydrolysis requires the plant enzyme myrosinase, giving myrosinase a key role in health promotion by brassica vegetables. Myrosinase measurement typically involves isolating crude protein, potentially underestimating activity in whole foods. Myrosinase activity was estimated using unextracted fresh tissues of five broccoli and three kale cultivars, measuring the formation of allyl isothiocyanate (AITC) and/or glucose from exogenous sinigrin. A correlation between AITC and glucose formation was found, although activity was substantially lower measured as glucose release. Using exogenous sinigrin or endogenous glucoraphanin, concentrations of the hydrolysis products AITC and sulforaphane correlated (r = 0.859; p = 0.006), suggesting that broccoli shows no myrosinase selectivity among sinigrin and glucoraphanin. Measurement of AITC formation provides a novel, reliable estimation of myrosinase-dependent isothiocyanate formation suitable for use with whole vegetable food samples. PMID:25051514

  11. Estimation of Evapotranspiration as a function of Photosynthetic Active Radiation

    NASA Astrophysics Data System (ADS)

    Wesley, E.; Migliaccio, K.; Judge, J.

    2012-12-01

    The purpose of this research project is to more accurately measure the water balance and energy movements in order to properly allocate water resources at the Snapper Creek Site in Miami-Dade County, FL, by quantifying and estimating evapotranspiration (ET). ET is generally estimated using weather-based equations; this project focused on estimating ET as a function of Photosynthetic Active Radiation (PAR). The project objectives were, first, to compose a function of PAR and calculated coefficients that can accurately estimate daily ET values with the fewest variables in its estimation equation, and second, to compare the newly identified PAR-based ET estimation function to TURC estimations, in comparison to our actual Eddy Covariance (EC) ET data, and determine the differences in ET values. PAR, volumetric water content (VWC), and temperature (T) data were quality checked and used in developing singular and multiple variable regression models fit with SigmaPlot software. Fifteen different ET estimation equations were evaluated against EC ET and TURC-estimated ET using R2 and slope factors. The selected equation that best estimated EC ET was cross validated using a 5 month data set; its daily and monthly ET values and sums were compared against the commonly used TURC equation. Using a multiple variable regression model, an equation with three variables (i.e., VWC, T, and PAR) was identified that best fit daily EC ET data. However, a regression was also found that used only PAR and provided ET predictions of similar accuracy. The PAR-based regression model predicted daily EC ET more accurately than the traditional TURC method. Using only PAR to estimate ET reduces the input variables as compared to the TURC model, which requires T and solar radiation. Thus, not only is the PAR approach more accurate but also more cost effective. The PAR-based ET estimation equation derived in this study may be over fit considering only 5 months of data were used to produce the PAR

  12. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  13. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  14. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of the active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any locations on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any locations on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz which needs to be maintained for a typical active noise control

  15. MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING

    PubMed Central

    Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao

    2013-01-01

    We study a marginal empirical likelihood approach in scenarios when the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and the generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainties of such estimators. Such a merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive to distributional assumptions, and can be conveniently adapted to be applied in a broad range of scenarios such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach. PMID:24415808
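
    The screening statistic discussed above is the marginal empirical likelihood ratio evaluated at zero. For a single covariate and the moment condition E[xy] = 0, it can be computed with a one-dimensional Lagrange-multiplier search, as in the sketch below; the simulated data and this particular moment condition are chosen only to illustrate the computation, not to reproduce the paper's procedure.

        import numpy as np
        from scipy import optimize

        rng = np.random.default_rng(7)
        n = 400
        x_signal = rng.normal(size=n)                 # covariate that does drive the response
        x_noise = rng.normal(size=n)                  # covariate that does not
        y = 0.5 * x_signal + rng.normal(size=n)

        def neg2_log_el(z):
            # -2 log empirical likelihood ratio for the hypothesis E[z] = 0.
            if z.min() >= 0 or z.max() <= 0:          # zero lies outside the convex hull
                return np.inf
            lo = -1.0 / z.max() + 1e-8                # admissible range for the Lagrange multiplier
            hi = -1.0 / z.min() - 1e-8
            lam = optimize.brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
            return 2.0 * np.sum(np.log1p(lam * z))

        # Marginal moment condition E[x*y] = 0 holds when the covariate does not contribute.
        for name, x in (("signal covariate", x_signal), ("noise covariate", x_noise)):
            print(f"{name}: -2 log EL ratio at zero = {neg2_log_el(x * y):.2f}")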

  16. Estimation of restraint stress in rats using salivary amylase activity.

    PubMed

    Matsuura, Tetsuya; Takimura, Ryo; Yamaguchi, Masaki; Ichinose, Mitsuyuki

    2012-09-01

    The rat is an ideal model animal for studying physical and psychological stresses. Recent human studies have shown that salivary amylase activity is a useful biomarker of stress in our social life. To estimate the usefulness of amylase activity as a biomarker of stress in rats, we analyzed changes in physiological parameters including amylase activity and anatomical variables, which were induced by a mild restraint of paws (10 min, 3 times/week, 9 weeks). The quantities of food and water intake and excretion amount of the stress rats were smaller than those of the control rats during the experimental period (5-13 weeks). The body weight of the stress rats decreased compared with that of the control rats. Moreover, the enlargement of the adrenal gland was confirmed in the stress rats, indicating that the mild restraint caused a chronic stress response. The amylase activities of the stress rats were significantly greater than those of the control rats at 5 weeks of age. However, the amylase activity of the stress rats decreased compared with that of the control rats after 6 weeks of age. These results indicate that amylase activity is increased by acute stress and reduced by chronic stress, which is caused by repeated restraint stress. In conclusion, amylase activity is a useful biomarker of acute and chronic stresses in rats. PMID:22753135

  17. Expressed Likelihood as Motivator: Creating Value through Engaging What’s Real

    PubMed Central

    Higgins, E. Tory; Franks, Becca; Pavarini, Dana; Sehnert, Steen; Manley, Katie

    2012-01-01

    Our research tested two predictions regarding how likelihood can have motivational effects as a function of how a probability is expressed. We predicted that describing the probability of a future event that could be either A or B using the language of high likelihood (“80% A”) rather than low likelihood (“20% B”), i.e., high rather than low expressed likelihood, would make a present activity more real and engaging, as long as the future event had properties relevant to the present activity. We also predicted that strengthening engagement from the high (vs. low) expressed likelihood of a future event would intensify the value of present positive and negative objects (in opposite directions). Both predictions were supported. There was also evidence that this intensification effect from expressed likelihood was independent of the actual probability or valence of the future event. What mattered was whether high versus low likelihood language was used to describe the future event. PMID:23940411

  18. Likelihood of volcanic eruption at Long Valley, California, is reduced

    USGS Publications Warehouse

    Kelly, D.

    1984-01-01

    A relatively low level of earthquake activity as well as reduced rates of ground deformation over the past year have led U.S. Geological Survey scientists to conclude that the likelihood of imminent volcanic activity at Long Valley, California, is reduced from that of mid-1982 through 1983.

  19. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt max and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
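
    Both ingredients compared above, matched-filter detection of the fiducial point and sin(x)/x interpolation to refine it between samples, can be demonstrated on a synthetic electrogram. In the sketch below the waveform shape, noise level, sampling rate, and upsampling factor are arbitrary choices, and FFT-based resampling stands in for the sin(x)/x interpolation.

        import numpy as np
        from scipy import signal

        fs = 500.0                                        # sampling rate (Hz), assumed
        n = 512
        t = np.arange(n) / fs
        true_activation = 0.4032                          # seconds, deliberately off the sample grid

        def deflection(times, center, width=0.008):
            # Biphasic waveform standing in for a local activation complex.
            return -(times - center) / width * np.exp(-((times - center) / width) ** 2)

        rng = np.random.default_rng(8)
        electrogram = deflection(t, true_activation) + 0.05 * rng.normal(size=n)

        # Matched filter: slide a short template and find the best-matching offset.
        tt = np.arange(0, 0.04, 1 / fs)                   # 40 ms template support
        template = deflection(tt, 0.02)
        corr = np.correlate(electrogram, template, mode="valid")
        coarse = (np.argmax(corr) + len(template) // 2) / fs   # add the template centre offset

        # sin(x)/x interpolation: upsample the match score 10x to refine between samples.
        up = 10
        corr_up = signal.resample(corr, up * len(corr))
        fine = (np.argmax(corr_up) / up + len(template) // 2) / fs

        print(f"coarse: {coarse * 1000:.1f} ms   refined: {fine * 1000:.2f} ms   "
              f"true: {true_activation * 1000:.2f} ms")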

  20. A maximum likelihood framework for protein design

    PubMed Central

    Kleinman, Claudia L; Rodrigue, Nicolas; Bonnard, Cécile; Philippe, Hervé; Lartillot, Nicolas

    2006-01-01

    Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and

  1. Why keep the pressure to estimate prokaryotic activities? (Invited)

    NASA Astrophysics Data System (ADS)

    Tamburini, C.

    2013-12-01

    Recent discoveries challenge the paradigm that cycling of organic matter is slow in the deep sea and mediated by microbial food webs of static structure and function. Data showing spatial variation in prokaryotic abundance and activity support the hypothesis that deep-sea microorganisms respond dynamically to variations in organic matter input to the bathypelagic realm. Moreover, almost half of the total water column heterotrophic prokaryotic production takes place below the epipelagic layer. Compiled global budgets suggest that the estimate of metabolic activity in the dark pelagic ocean exceeds the input of organic carbon. However, these conclusions are based mainly on measurements done at atmospheric pressure without taking into account pressure effects on natural prokaryotic assemblages. In this presentation, I will clarify the effect of hydrostatic pressure on prokaryotes living in the dark ocean and discuss how this informs experimental design and the achievement of more accurate estimates of microbial activity in the deep ocean. Finally, their potential capabilities to degrade complex compounds, as well as chemolithoautotrophy in the deep ocean, represent examples of ways to explore more deeply the role of deep-sea prokaryotes in global cycles.

  2. Multiscale likelihood analysis and image reconstruction

    NASA Astrophysics Data System (ADS)

    Willett, Rebecca M.; Nowak, Robert D.

    2003-11-01

    The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.

  3. Influences of amount of pedigree information on computing time and of model assumptions on restricted maximum-likelihood estimates of population parameters in Swiss black-brown mountain sheep.

    PubMed

    Hagger, C; Schneeberger, M

    1995-08-01

    Average daily gains between birth and 30 d of age of 42,644 lambs of Swiss Black-Brown Mountain Sheep were used in this analysis. The influence of amount of pedigree information on computing time and on REML estimates of population parameters was investigated on a subset of 7,848 lambs. If all available pedigree information was used, 89.4% of the lambs had at least four complete generations of known ancestors. For the reduced pedigree information, only parents and grandparents of a lamb were included. In the data set with complete pedigree information, 2,616 additional animals (without records) caused 19.4% more equations, 4.6 times the number of non-zero elements in the system of equations, and 21.2 times the computing time to reach convergence. The difference in amount of pedigree information had only a marginal influence on the estimates of direct heritability (h2), maternal heritability (m2), permanent environmental effects, and on the genetic correlation between direct and maternal effect (rAM). The complete data set of 42,644 recorded lambs was randomly split into four subsets to save computing time. In a fifth subset of 27,787 lambs (Set C) all combinations of recorded grandparents and grand-offspring were accumulated because they contain information on the covariance between direct and maternal effects (cov(AM)). Including cov(AM) in the model assumptions increased estimates of h2 and m2 in all subsets. Estimates from Set C were smallest but showed the same trend. The estimate of rAM was always strongly negative, ≤ -0.64. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:8567455

  4. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
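
    The flavour of the approximation is easy to see for a Poisson model with an exponential link: the sum of rate terms over trials is replaced by its expectation under the covariate distribution, which is available in closed form when the covariates are Gaussian. The sketch below compares the exact and expected log-likelihoods on simulated data; the dimensions and parameter values are toy choices, not the retinal-recording analysis of the paper.

        import numpy as np

        rng = np.random.default_rng(9)
        n, d = 5000, 3
        mu, cov = np.zeros(d), np.eye(d)                   # covariate distribution, assumed known
        X = rng.multivariate_normal(mu, cov, size=n)
        theta_true = np.array([0.4, -0.3, 0.2])
        y = rng.poisson(np.exp(X @ theta_true))

        def exact_ll(theta):
            # Exact Poisson log-likelihood (up to a constant in theta).
            eta = X @ theta
            return y @ eta - np.exp(eta).sum()

        def expected_ll(theta):
            # Replace sum_i exp(x_i . theta) by n * E[exp(x . theta)] under x ~ N(mu, cov).
            log_mgf = mu @ theta + 0.5 * theta @ cov @ theta
            return y @ (X @ theta) - n * np.exp(log_mgf)

        for theta in (theta_true, theta_true * 0.5, np.zeros(d)):
            print(f"theta={np.round(theta, 2)}   exact={exact_ll(theta):10.1f}   "
                  f"expected={expected_ll(theta):10.1f}")

    The computational gain comes from the fact that the expectation term no longer depends on the individual trials, so it is evaluated once per candidate parameter vector rather than summed over all observations.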

  5. Physical Activity in Vietnam: Estimates and Measurement Issues

    PubMed Central

    Bui, Tan Van; Blizzard, Christopher Leigh; Luong, Khue Ngoc; Truong, Ngoc Le Van; Tran, Bao Quoc; Otahal, Petr; Srikanth, Velandai; Nelson, Mark Raymond; Au, Thuy Bich; Ha, Son Thai; Phung, Hai Ngoc; Tran, Mai Hoang; Callisaya, Michele; Gall, Seana

    2015-01-01

    Introduction Our aims were to provide the first national estimates of physical activity (PA) for Vietnam, and to investigate issues affecting their accuracy. Methods Measurements were made using the Global Physical Activity Questionnaire (GPAQ) on a nationally-representative sample of 14706 participants (46.5% males, response 64.1%) aged 25−64 years selected by multi-stage stratified cluster sampling. Results Approximately 20% of Vietnamese people had no measureable PA during a typical week, but 72.9% (men) and 69.1% (women) met WHO recommendations for PA by adults for their age. On average, 52.0 (men) and 28.0 (women) Metabolic Equivalent Task (MET)-hours/week (largely from work activities) were reported. Work and total PA were higher in rural areas and varied by season. Less than 2% of respondents provided incomplete information, but an additional one-in-six provided unrealistically high values of PA. Those responsible for reporting errors included persons from rural areas and all those with unstable work patterns. Box-Cox transformation (with an appropriate constant added) was the most successful method of reducing the influence of large values, but energy-scaled values were most strongly associated with pathophysiological outcomes. Conclusions Around seven-in-ten Vietnamese people aged 25–64 years met WHO recommendations for total PA, which was mainly from work activities and higher in rural areas. Nearly all respondents were able to report their activity using the GPAQ, but with some exaggerated values and seasonal variation in reporting. Data transformation provided plausible summary values, but energy-scaling fared best in association analyses. PMID:26485044

  6. Targeted maximum likelihood based causal inference: Part II.

    PubMed

    van der Laan, Mark J

    2010-01-01

    In this article, we provide a template for the practical implementation of the targeted maximum likelihood estimator for analyzing causal effects of multiple time point interventions, for which the methodology was developed and presented in Part I. In addition, the application of this template is demonstrated in two important estimation problems: estimation of the effect of individualized treatment rules based on marginal structural models for treatment rules, and the effect of a baseline treatment on survival in a randomized clinical trial in which the time till event is subject to right censoring. PMID:21731531

  7. Targeted Maximum Likelihood Based Causal Inference: Part II

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    In this article, we provide a template for the practical implementation of the targeted maximum likelihood estimator for analyzing causal effects of multiple time point interventions, for which the methodology was developed and presented in Part I. In addition, the application of this template is demonstrated in two important estimation problems: estimation of the effect of individualized treatment rules based on marginal structural models for treatment rules, and the effect of a baseline treatment on survival in a randomized clinical trial in which the time till event is subject to right censoring. PMID:21731531

  8. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-01

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored. PMID:21797584

  9. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1985-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a Landsat MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95 percent of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50 percent of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73 percent of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.
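
    A minimal sketch of the kind of relationship the abstract describes: a normalized-difference vegetation index computed from red and near-infrared reflectance factors and regressed linearly against absorbed PAR. The band names, reflectance values, and the linear form are illustrative assumptions, not the study's data or exact indices.

```python
import numpy as np

# Illustrative reflectance factors (fractions) for a handful of observation dates;
# "red" and "nir" stand in for the MSS bands used to form vegetation indices.
red = np.array([0.12, 0.10, 0.08, 0.06, 0.05])
nir = np.array([0.25, 0.32, 0.40, 0.46, 0.50])
absorbed_par = np.array([0.20, 0.38, 0.55, 0.70, 0.78])  # fraction of incident PAR

nd = (nir - red) / (nir + red)  # normalized-difference vegetation index

# Simple least-squares fit of absorbed PAR on the index (a linear form is assumed).
slope, intercept = np.polyfit(nd, absorbed_par, 1)
pred = intercept + slope * nd
r2 = 1 - np.sum((absorbed_par - pred) ** 2) / np.sum((absorbed_par - absorbed_par.mean()) ** 2)
print(f"APAR ~ {intercept:.2f} + {slope:.2f} * ND,  R^2 = {r2:.3f}")
```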

  10. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1984-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a LANDSAT MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95% of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50% of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73% of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.

  11. Intercepted photosynthetically active radiation estimated by spectral reflectance

    NASA Technical Reports Server (NTRS)

    Hatfield, J. L.; Asrar, G.; Kanemasu, E. T.

    1984-01-01

    Interception of photosynthetically active radiation (PAR) was evaluated relative to greenness and normalized difference (MSS (7-5)/(7+5)) for five planting dates of wheat for 1978-79 and 1979-80 at Phoenix, Arizona. Intercepted PAR was calculated from leaf area index and stage of growth. Linear relationships were found with greenness and normalized difference, with separate relationships describing growth and senescence of the crop. Normalized difference was significantly better than greenness for all planting dates. For the leaf area growth portion of the season the relation between PAR interception and normalized difference was the same over years and planting dates. For the leaf senescence phase the relationships showed more variability due to the lack of data on light interception in sparse and senescing canopies. Normalized difference could be used to estimate PAR interception throughout a growing season.

  12. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
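
    A minimal sketch of a pairwise composite likelihood under the FGM copula, assuming the marginal survival times have already been transformed to uniform scores and using a single constant dependence parameter rather than the distance-dependent parameterization developed in the paper.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize_scalar

def fgm_copula_density(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula density, valid for -1 <= theta <= 1."""
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def pairwise_neg_loglik(theta, u):
    """Negative pairwise composite log-likelihood over all pairs of uniform scores."""
    nll = 0.0
    for i, j in combinations(range(len(u)), 2):
        nll -= np.log(fgm_copula_density(u[i], u[j], theta))
    return nll

# Illustrative uniform scores, e.g. probability-integral transforms of
# (uncensored) event times under fitted marginal models.
rng = np.random.default_rng(0)
u = rng.uniform(size=40)

res = minimize_scalar(pairwise_neg_loglik, bounds=(-1.0, 1.0), args=(u,), method="bounded")
print(f"estimated FGM dependence parameter: {res.x:.3f}")
```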

  13. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

  14. Tradeoffs in regularized maximum-likelihood image restoration

    NASA Astrophysics Data System (ADS)

    Markham, Joanne; Conchello, Jose-Angel

    1997-04-01

    All algorithms for three-dimensional deconvolution of fluorescence microscopical images have as a common goal the estimation of a specimen function (SF) that is consistent with the recorded image and the process for image formation and recording. To check for consistency, the image of the estimated SF predicted by the imaging operator is compared to the recorded image, and the similarity between them is used as a figure of merit (FOM) in the algorithm to improve the specimen function estimate. Commonly used FOMs include squared differences, maximum entropy, and maximum likelihood (ML). The imaging operator is usually characterized by the point-spread function (PSF), the image of a point source of light, or its Fourier transform, the optical transfer function (OTF). Because the OTF is non-zero only over a small region of the spatial-frequency domain, the inversion of the image formation operator is non-unique and the estimated SF is potentially artifactual. Adding a term to the FOM that penalizes some unwanted behavior of the estimated SF effectively ameliorates potential artifacts, but at the same time biases the estimation process. For example, an intensity penalty avoids overly large pixel values but biases the SF to small pixel values. A roughness penalty avoids rapid pixel to pixel variations but biases the SF to be smooth. In this article we assess the effects of the roughness and intensity penalties on maximum likelihood image estimation.
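
    The figure of merit discussed above can be written compactly. The sketch below scores a candidate specimen function with a Poisson negative log-likelihood plus intensity and roughness penalties; the 2-D shift-invariant imaging model, the penalty forms, and the toy data are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.signal import fftconvolve

def penalized_neg_loglik(sf, image, psf, alpha_intensity=0.0, alpha_rough=0.0):
    """Poisson negative log-likelihood of the recorded image given a specimen
    function (SF) estimate, plus optional intensity and roughness penalties."""
    pred = fftconvolve(sf, psf, mode="same")
    pred = np.clip(pred, 1e-12, None)                      # avoid log(0)
    neg_loglik = np.sum(pred - image * np.log(pred))
    intensity_pen = alpha_intensity * np.sum(sf ** 2)      # discourages large pixel values
    rough_pen = alpha_rough * (np.sum(np.diff(sf, axis=0) ** 2)
                               + np.sum(np.diff(sf, axis=1) ** 2))  # discourages rapid variation
    return neg_loglik + intensity_pen + rough_pen

# Toy usage with a synthetic object and a small Gaussian-like PSF.
rng = np.random.default_rng(0)
h = np.hanning(7); psf = np.outer(h, h); psf /= psf.sum()
truth = np.zeros((32, 32)); truth[12:20, 12:20] = 50.0
image = rng.poisson(np.clip(fftconvolve(truth, psf, mode="same"), 0, None))
print(penalized_neg_loglik(truth, image, psf, alpha_rough=0.1))
```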

  15. Likelihood Methods for Testing Group Problem Solving Models with Censored Data.

    ERIC Educational Resources Information Center

    Regal, Ronald R.; Larntz, Kinley

    1978-01-01

    Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
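
    A minimal sketch of a censored-data likelihood of the kind described above, assuming (purely for illustration) exponentially distributed solution times: solved problems contribute the density, groups stopped by the time limit contribute the survival probability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik_exponential(rate, times, solved, time_limit):
    """Censored-data negative log-likelihood under an exponential model."""
    times = np.asarray(times, dtype=float)
    solved = np.asarray(solved, dtype=bool)
    loglik = np.sum(np.log(rate) - rate * times[solved])   # density terms for solved problems
    loglik += (~solved).sum() * (-rate * time_limit)        # survival terms for censored groups
    return -loglik

times  = [3.2, 5.1, 7.8, 10.0, 10.0, 4.4]   # minutes; 10.0 is the imposed time limit
solved = [True, True, True, False, False, True]
res = minimize_scalar(neg_loglik_exponential, bounds=(1e-6, 5.0),
                      args=(times, solved, 10.0), method="bounded")
print(f"ML estimate of solution rate: {res.x:.3f} per minute")
```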

  16. FITTING STATISTICAL DISTRIBUTIONS TO AIR QUALITY DATA BY THE MAXIMUM LIKELIHOOD METHOD

    EPA Science Inventory

    A computer program has been developed for fitting statistical distributions to air pollution data using maximum likelihood estimation. Appropriate uses of this software are discussed and a grouped data example is presented. The program fits the following continuous distributions:...
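
    A small example of fitting one such distribution by maximum likelihood, assuming a two-parameter lognormal and hypothetical concentration values; the program described in the abstract fits several distributions and handles grouped data, which this sketch does not.

```python
import numpy as np
from scipy import stats

# Illustrative 24-hour pollutant concentrations (e.g. ug/m^3); not real monitoring data.
conc = np.array([12.0, 18.5, 22.1, 9.8, 31.0, 45.2, 15.6, 27.3, 60.4, 19.9])

# Maximum likelihood fit of a two-parameter lognormal (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(conc, floc=0)
print(f"geometric mean ~ {scale:.1f}, geometric std ~ {np.exp(shape):.2f}")

# The fitted distribution can then be used, e.g., to estimate the probability
# of exceeding an air-quality standard.
standard = 50.0
print(f"P(concentration > {standard}) = {stats.lognorm.sf(standard, shape, loc, scale):.3f}")
```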

  17. Out-of-field activity in the estimation of mean lung attenuation coefficient in PET/MR

    NASA Astrophysics Data System (ADS)

    Berker, Yannick; Salomon, André; Kiessling, Fabian; Schulz, Volkmar

    2014-01-01

    In clinical PET/MR, photon attenuation is a source of potentially severe image artifacts. Correction approaches include those based on MR image segmentation, in which image voxels are classified and assigned predefined attenuation coefficients to obtain an attenuation map. In whole-body imaging, however, mean lung attenuation coefficients (LAC) can vary by a factor of 2, and the choice of an inappropriate mean LAC can have significant impact on PET quantification. Previously, we proposed a method combining MR image segmentation, tissue classification and Maximum Likelihood reconstruction of Attenuation and Activity (MLAA) to estimate mean LAC values. In this work, we quantify the influence of out-of-field (OOF) accidental coincidences when acquiring data in a single bed position. We therefore carried out GATE simulations of realistic, whole-body activity and attenuation distributions derived from data of three patients. A bias of 15% was found and significantly reduced by removing OOF accidentals from our data, suggesting that OOF accidentals are the major contributor to the bias. We found approximately equal contributions from OOF scatter and OOF randoms, and present results after correcting the bias by rescaling. Results using temporal subsets suggest that 30-second acquisitions may be sufficient for estimating the mean LAC with less than 5% uncertainty if the mean bias can be corrected for.

  18. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  19. Model Fit after Pairwise Maximum Likelihood.

    PubMed

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  20. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
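
    A toy example of the prior-likelihood integration studied here, using a Beta prior and binomial likelihood (an assumption for illustration, not the lottery task used in the experiment): as the sample size grows, the posterior mean moves from the prior toward the likelihood information.

```python
def posterior_mean(prior_a, prior_b, successes, n):
    """Posterior mean of a reward probability under a Beta prior and binomial
    likelihood; the weight on the sample proportion grows with sample size n."""
    return (prior_a + successes) / (prior_a + prior_b + n)

prior_a, prior_b = 6.0, 4.0   # prior belief: reward probability around 0.6
sample_p = 0.2                # likelihood information points much lower

for n in (2, 10, 50):
    k = sample_p * n
    print(f"n = {n:3d}: posterior mean = {posterior_mean(prior_a, prior_b, k, n):.3f}")
```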

  1. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling which nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  2. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  3. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016

  4. Exclusion probabilities and likelihood ratios with applications to mixtures.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model. PMID:26160753
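
    A minimal single-locus sketch of the RMNE statistic contrasted with the LR above, under Hardy-Weinberg proportions and no dropout or drop-in; the allele frequencies and the mixture are hypothetical.

```python
def rmne_single_locus(mixture_alleles, allele_freqs):
    """Probability that a random, unrelated person is NOT excluded as a
    contributor at one locus: both of their alleles must appear in the mixture
    (Hardy-Weinberg assumed, no dropout or drop-in)."""
    p = sum(allele_freqs[a] for a in mixture_alleles)
    return p ** 2

# Hypothetical allele frequencies and a mixture showing three alleles at this locus.
freqs = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}
mixture = {"A", "B", "C"}

rmne = rmne_single_locus(mixture, freqs)
print(f"RMNE at this locus: {rmne:.3f}")
print(f"corresponding exclusion probability: {1 - rmne:.3f}")
```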

  5. Exact Likelihood-free Markov Chain Monte Carlo for Elliptically Contoured Distributions

    PubMed Central

    Marjoram, Paul

    2015-01-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible. PMID:26167984

  6. An independent sequential maximum likelihood approach to simultaneous track-to-track association and bias removal

    NASA Astrophysics Data System (ADS)

    Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang

    2015-12-01

    In this paper we propose an independent sequential maximum likelihood approach to address the joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association situations, followed by estimating a bias for each association. Then we calculate the likelihood of each association after the bias is compensated. Finally, we choose the maximum likelihood over all association situations as the association result, and the corresponding bias estimate is the registration result. Considering high false-alarm rates and interference, we adopt the independent sequential association to calculate the likelihood. Simulation results show that our proposed method produces the correct association results and simultaneously estimates the bias precisely for a small number of targets in a multi-sensor fusion system.

  7. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
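
    A short sketch of the step the abstract mentions: turning Gaussian class likelihoods into exhaustive, normalized class-membership probabilities that can seed probabilistic relaxation. The class means, covariances, priors, and pixel values are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_membership_probabilities(x, means, covs, priors=None):
    """Normalized class-membership probabilities from Gaussian class likelihoods."""
    k = len(means)
    priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors)
    likes = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                      for m, c in zip(means, covs)])
    post = priors * likes
    return post / post.sum()

# Two hypothetical spectral classes in a 2-band feature space.
means = [np.array([40.0, 60.0]), np.array([70.0, 30.0])]
covs  = [np.diag([25.0, 25.0]), np.diag([36.0, 16.0])]
pixel = np.array([55.0, 48.0])

print(class_membership_probabilities(pixel, means, covs))
```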

  8. Joint Estimation of Activity and Attenuation in Whole-Body TOF PET/MRI Using Constrained Gaussian Mixture Models.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-09-01

    It has recently been shown that the attenuation map can be estimated from time-of-flight (TOF) PET emission data using joint maximum likelihood reconstruction of attenuation and activity (MLAA). In this work, we propose a novel MRI-guided MLAA algorithm for emission-based attenuation correction in whole-body PET/MR imaging. The algorithm imposes MR spatial and CT statistical constraints on the MLAA estimation of attenuation maps using a constrained Gaussian mixture model (GMM) and a Markov random field smoothness prior. Dixon water and fat MR images were segmented into outside air, lung, fat and soft-tissue classes and an MR low-intensity (unknown) class corresponding to air cavities, cortical bone and susceptibility artifacts. The attenuation coefficients over the unknown class were estimated using a mixture of four Gaussians, and those over the known tissue classes using unimodal Gaussians, parameterized over a patient population. To eliminate misclassification of spongy bones with surrounding tissues, and thus include them in the unknown class, we heuristically suppressed fat in water images and also used a co-registered bone probability map. The proposed MLAA-GMM algorithm was compared with the MLAA algorithms proposed by Rezaei and Salomon using simulation and clinical studies with two different tracer distributions. The results showed that our proposed algorithm outperforms its counterparts in suppressing the cross-talk and scaling problems of activity and attenuation and thus produces PET images of improved quantitative accuracy. It can be concluded that the proposed algorithm effectively exploits the MR information and can pave the way toward accurate emission-based attenuation correction in TOF PET/MRI. PMID:25769148

  9. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  10. Monte Carlo Simulation to Estimate Likelihood of Direct Lightning Strikes

    NASA Technical Reports Server (NTRS)

    Mata, Carlos; Medelius, Pedro

    2008-01-01

    A software tool has been designed to quantify, at launch sites, the lightning exposure of the stack at the pads under different configurations. In order to predict lightning strikes to generic structures, this model uses leaders whose origins (in the x-y plane) are obtained from a 2D random, normal distribution.

  11. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

  12. Maximum-likelihood approach to strain imaging using ultrasound

    PubMed Central

    Insana, M. F.; Cook, L. T.; Bilgen, M.; Chaturvedi, P.; Zhu, Y.

    2009-01-01

    A maximum-likelihood (ML) strategy for strain estimation is presented as a framework for designing and evaluating bioelasticity imaging systems. Concepts from continuum mechanics, signal analysis, and acoustic scattering are combined to develop a mathematical model of the ultrasonic waveforms used to form strain images. The model includes three-dimensional (3-D) object motion described by affine transformations, Rayleigh scattering from random media, and 3-D system response functions. The likelihood function for these waveforms is derived to express the Fisher information matrix and variance bounds for displacement and strain estimation. The ML estimator is a generalized cross correlator for pre- and post-compression echo waveforms that is realized by waveform warping and filtering prior to cross correlation and peak detection. Experiments involving soft tissuelike media show the ML estimator approaches the Cramér–Rao error bound for small scaling deformations: at 5 MHz and 1.2% compression, the predicted lower bound for displacement errors is 4.4 µm and the measured standard deviation is 5.7 µm. PMID:10738797

  13. Likelihood alarm displays. [for human operator

    NASA Technical Reports Server (NTRS)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.

  14. A note on the asymptotic distribution of likelihood ratio tests to test variance components.

    PubMed

    Visscher, Peter M

    2006-08-01

    When using maximum likelihood methods to estimate genetic and environmental components of (co)variance, it is common to test hypotheses using likelihood ratio tests, since such tests have desirable asymptotic properties. In particular, the standard likelihood ratio test statistic is assumed asymptotically to follow a chi2 distribution with degrees of freedom equal to the number of parameters tested. Using the relationship between least squares and maximum likelihood estimators for balanced designs, it is shown why the asymptotic distribution of the likelihood ratio test for variance components does not follow a chi2 distribution with degrees of freedom equal to the number of parameters tested when the null hypothesis is true. Instead, the distribution of the likelihood ratio test is a mixture of chi2 distributions with different degrees of freedom. Implications for testing variance components in twin designs and for quantitative trait loci mapping are discussed. The appropriate distribution of the likelihood ratio test statistic should be used in hypothesis testing and model selection. PMID:16899155
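
    A two-line illustration of the practical consequence: for a single variance component tested on the boundary of the parameter space, the reference distribution is a 50:50 mixture of a point mass at zero and a chi-square with one degree of freedom, so the naive chi2(1) p-value is twice too large. The LRT value below is made up for illustration.

```python
from scipy.stats import chi2

def variance_component_pvalue(lrt_statistic):
    """P-value for testing one variance component on the boundary (sigma^2 = 0):
    the LRT statistic follows a 50:50 mixture of a point mass at zero and a
    chi-square with 1 degree of freedom, not a plain chi2(1)."""
    return 0.5 * chi2.sf(lrt_statistic, df=1)

lrt = 3.2
print(f"naive chi2(1) p-value : {chi2.sf(lrt, df=1):.4f}")
print(f"mixture p-value       : {variance_component_pvalue(lrt):.4f}")
```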

  15. Monocular distance estimation from optic flow during active landing maneuvers.

    PubMed

    van Breugel, Floris; Morgansen, Kristi; Dickinson, Michael H

    2014-06-01

    Vision is arguably the most widely used sensor for position and velocity estimation in animals, and it is increasingly used in robotic systems as well. Many animals use stereopsis and object recognition in order to make a true estimate of distance. For a tiny insect such as a fruit fly or honeybee, however, these methods fall short. Instead, an insect must rely on calculations of optic flow, which can provide a measure of the ratio of velocity to distance, but not either parameter independently. Nevertheless, flies and other insects are adept at landing on a variety of substrates, a behavior that inherently requires some form of distance estimation in order to trigger distance-appropriate motor actions such as deceleration or leg extension. Previous studies have shown that these behaviors are indeed under visual control, raising the question: how does an insect estimate distance solely using optic flow? In this paper we use a nonlinear control theoretic approach to propose a solution for this problem. Our algorithm takes advantage of visually controlled landing trajectories that have been observed in flies and honeybees. Finally, we implement our algorithm, which we term dynamic peering, using a camera mounted to a linear stage to demonstrate its real-world feasibility. PMID:24855045

  16. HALM: A Hybrid Asperity Likelihood Model for Italy

    NASA Astrophysics Data System (ADS)

    Gulia, L.; Wiemer, S.

    2009-04-01

    The Asperity Likelihood Model (ALM), first developed and currently tested for California, hypothesizes that small-scale spatial variations in the b-value of the Gutenberg and Richter relationship play a central role in forecasting future seismicity (Wiemer and Schorlemmer, SRL, 2007). The physical basis of the model is the concept that the local b-value is inversely dependent on applied shear stress. Thus low b-values (b < 0.7) characterize the locked patches of faults (asperities) from which future mainshocks are more likely to be generated, whereas the high b-values (b > 1.1) found, for example, in creeping sections of faults suggest a lower seismic hazard. To test this model in a reproducible and prospective way suitable for the requirements of the CSEP initiative (www.cseptesting.org), the b-value variability is mapped on a grid. First, using the entire dataset above the overall magnitude of completeness, the regional b-value is estimated. This value is then compared to the one locally estimated at each grid node for a number of radii; we use the local value if its likelihood score, corrected for the degrees of freedom using the Akaike Information Criterion, suggests doing so. We are currently calibrating the ALM model for implementation in the Italian testing region, the first region within the CSEP EU testing Center (eu.cseptesting.org) for which fully prospective tests of earthquake likelihood models will commence in Europe. We are also developing a modified approach, a 'hybrid' between a grid-based and a zoning one: the HALM (Hybrid Asperity Likelihood Model). According to HALM, the Italian territory is divided into three distinct regions depending on the main tectonic elements, combined with knowledge derived from GPS networks, seismic profile interpretation, borehole breakouts and the focal mechanisms of the events. The local b-value variability was thus mapped using three independent overall b-values. We evaluate the performance of the two models in
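
    A minimal sketch of the two ingredients the abstract combines: the Aki/Utsu maximum-likelihood b-value estimate and an AIC comparison between a fixed regional b-value and a locally estimated one. The synthetic magnitudes, completeness magnitude, and regional value are assumptions for illustration.

```python
import numpy as np

LOG10_E = np.log10(np.e)

def b_value_mle(mags, mc):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return LOG10_E / (m.mean() - mc), m

def log_likelihood(mags_above_mc, mc, b):
    """Log-likelihood of magnitudes under an exponential (Gutenberg-Richter) model."""
    beta = b * np.log(10.0)
    return len(mags_above_mc) * np.log(beta) - beta * np.sum(mags_above_mc - mc)

rng = np.random.default_rng(1)
mc = 2.0
local_mags = mc + rng.exponential(scale=LOG10_E / 0.65, size=200)  # synthetic low-b patch

b_local, m = b_value_mle(local_mags, mc)
b_regional = 1.0                                                   # assumed regional value

# Akaike comparison: the local fit estimates one extra parameter.
aic_regional = -2 * log_likelihood(m, mc, b_regional)
aic_local = 2 * 1 - 2 * log_likelihood(m, mc, b_local)
print(f"local b = {b_local:.2f};  use local value: {aic_local < aic_regional}")
```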

  17. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  18. Growing local likelihood network: Emergence of communities

    NASA Astrophysics Data System (ADS)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  19. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

  20. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.
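
    A small sketch of the straightforward (brute-force) symbol-to-bit likelihood mapping that the innovation above makes efficient; the efficient algorithm itself is not reproduced here, and the symbol likelihoods are hypothetical.

```python
import numpy as np

def bit_llrs_from_symbol_likelihoods(symbol_likelihoods, bits_per_symbol):
    """Map symbol likelihoods to per-bit log-likelihood ratios by summing the
    likelihoods of all constellation labels whose given bit equals 0
    (numerator) or 1 (denominator)."""
    llrs = []
    labels = np.arange(len(symbol_likelihoods))
    for b in range(bits_per_symbol):
        bit = (labels >> (bits_per_symbol - 1 - b)) & 1
        p0 = symbol_likelihoods[bit == 0].sum()
        p1 = symbol_likelihoods[bit == 1].sum()
        llrs.append(np.log(p0 / p1))
    return np.array(llrs)

# Hypothetical likelihoods for the 4 symbols of a QPSK constellation (labels 00..11).
likes = np.array([0.55, 0.25, 0.15, 0.05])
print(bit_llrs_from_symbol_likelihoods(likes, bits_per_symbol=2))
```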

  1. A pairwise likelihood-based approach for changepoint detection in multivariate time series models

    PubMed Central

    Ma, Ting Fung; Yau, Chun Yip

    2016-01-01

    This paper develops a composite likelihood-based approach for multiple changepoint estimation in multivariate time series. We derive a criterion based on pairwise likelihood and minimum description length for estimating the number and locations of changepoints and for performing model selection in each segment. The number and locations of the changepoints can be consistently estimated under mild conditions and the computation can be conducted efficiently with a pruned dynamic programming algorithm. Simulation studies and real data examples demonstrate the statistical and computational efficiency of the proposed method. PMID:27279666

  2. Using an Active Sensor to Estimate Orchard Grass (Dactylis glomerata L.) Dry Matter Yield and Quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Remote sensing in the form of active sensors could be used to estimate forage biomass on spatial and temporal scales. The objective of this study is to use canopy reflectance measurements from an active remote sensor to compare different vegetation indices as a means of estimating final dry matter y...

  3. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.

  4. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary, (but not sufficient) condition is that past and present climate be simulated well. Quantify the likelihood that a (summary statistic computed from a) set of observations arises from a physical system with the characteristics captured by a model generated time series. Given a prior on models, we can go further: posterior distribution of model given observations.

  5. An alternative method to measure the likelihood of a financial crisis in an emerging market

    NASA Astrophysics Data System (ADS)

    Özlale, Ümit; Metin-Özcan, Kıvılcım

    2007-07-01

    This paper utilizes an early warning system in order to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy had followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit and an increase in default risk raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced here can be applied to other emerging market economies as well.

  6. Evaluating network models: A likelihood analysis

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Qiang; Zhang, Qian-Ming; Zhou, Tao

    2012-04-01

    Many models are put forward to mimic the evolution of real networked systems. A well-accepted way to judge the validity is to compare the modeling results with real networks subject to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models since there are too many structural features while there is no criterion to select and assign weights on them. Motivated by the studies on link prediction algorithms, we propose a unified method to evaluate the network models via the comparison of the likelihoods of the currently observed network driven by different models, with an assumption that the higher the likelihood is, the more accurate the model is. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both two models are better than the Barabási-Albert (BA) and Erdös-Rényi (ER) models. Our method can be further applied in determining the optimal values of parameters that correspond to the maximal likelihood. The experiment indicates that the parameters obtained by our method can better capture the characters of newly added nodes and links in the AS-level Internet than the original methods in the literature.

  7. AN EFFICIENT APPROXIMATION TO THE LIKELIHOOD FOR GRAVITATIONAL WAVE STOCHASTIC BACKGROUND DETECTION USING PULSAR TIMING DATA

    SciTech Connect

    Ellis, J. A.; Siemens, X.; Van Haasteren, R.

    2013-05-20

    Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10{sup -7} Hz-10{sup -9} Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes.

  8. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense. PMID:23030125
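
    In the Gaussian case mentioned above, transfer entropy reduces to half the Granger-causality log-likelihood ratio between a full and a restricted autoregression. The sketch below estimates it that way for a toy coupled pair of series; the single lag and the synthetic data are assumptions for illustration.

```python
import numpy as np

def gaussian_transfer_entropy(x, y, lag=1):
    """Transfer entropy from x to y under a Gaussian/linear model, computed as
    half the log ratio of residual variances between a restricted and a full
    autoregression of y; a single lag is used for simplicity."""
    yt, ylag, xlag = y[lag:], y[:-lag], x[:-lag]

    def residual_variance(design):
        coef, *_ = np.linalg.lstsq(design, yt, rcond=None)
        resid = yt - design @ coef
        return np.mean(resid ** 2)

    ones = np.ones_like(ylag)
    var_restricted = residual_variance(np.column_stack([ones, ylag]))
    var_full = residual_variance(np.column_stack([ones, ylag, xlag]))
    return 0.5 * np.log(var_restricted / var_full)

rng = np.random.default_rng(0)
x = rng.normal(size=5001)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal(scale=0.5)

print(f"estimated TE (x -> y): {gaussian_transfer_entropy(x, y):.3f}")
print(f"estimated TE (y -> x): {gaussian_transfer_entropy(y, x):.3f}")
```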

  9. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  10. Estimating Physical Activity in Youth Using a Wrist Accelerometer

    PubMed Central

    Crouter, Scott E.; Flynn, Jennifer I.; Bassett, David R.

    2014-01-01

    PURPOSE The purpose of this study was to develop and validate methods for analyzing wrist accelerometer data in youth. METHODS 181 youth (mean±SD; age, 12.0±1.5 yrs) completed 30-min of supine rest and 8-min each of 2 to 7 structured activities (selected from a list of 25). Receiver Operator Characteristic (ROC) curves and regression analyses were used to develop prediction equations for energy expenditure (child-METs; measured activity VO2 divided by measured resting VO2) and cut-points for computing time spent in sedentary behaviors (SB), light (LPA), moderate (MPA), and vigorous (VPA) physical activity. Both vertical axis (VA) and vector magnitude (VM) counts per 5 seconds were used for this purpose. The validation study included 42 youth (age, 12.6±0.8 yrs) who completed approximately 2-hrs of unstructured PA. During all measurements, activity data were collected using an ActiGraph GT3X or GT3X+, positioned on the dominant wrist. Oxygen consumption was measured using a Cosmed K4b2. Repeated measures ANOVAs were used to compare measured vs predicted child-METs (regression only), and time spent in SB, LPA, MPA, and VPA. RESULTS All ROC cut-points were similar for area under the curve (≥0.825), sensitivity (≥0.756), and specificity (≥0.634) and they significantly underestimated LPA and overestimated VPA (P<0.05). The VA and VM regression models were within ±0.21 child-METs of mean measured child-METs and ±2.5 minutes of measured time spent in SB, LPA, MPA, and VPA, respectively (P>0.05). CONCLUSION Compared to measured values, the VA and VM regression models developed on wrist accelerometer data had insignificant mean bias for child-METs and time spent in SB, LPA, MPA, and VPA; however they had large individual errors. PMID:25207928

  11. Likelihood-free Bayesian computation for structural model calibration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2016-04-01

    Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for the condition assessment. FE model updating is an inverse problem and prone to be ill-posed and ill-conditioned when there are many errors and uncertainties in both an FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief on the parameters with the available measurements. In Bayesian inference, likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, likelihood should be carefully chosen to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be the independent identically distributed Gaussian distribution with the zero mean and constant variance. However, this assumption may cause biased and over/underestimated estimates of parameters, so that the uncertainty quantification and prediction are questionable. To alleviate the potential misuse of the inadequate likelihood, this study introduced approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the behavioral similarities between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the result of the numerical study, we observed that the likelihood-free Bayesian computation can quantify the updating parameters correctly and that its predictive capability for measurements not used in the calibration is also secured.
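
    A minimal sketch of the likelihood-free idea described above, using plain ABC rejection rather than the likelihood-free MCMC of the study: prior draws are kept when a simulated response is close to the measurement in Euclidean distance. The toy "model", prior range, and tolerance quantile are all illustrative assumptions.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sampler, n_draws=20000, quantile=0.01):
    """Approximate Bayesian computation by rejection: keep the prior draws whose
    simulated response is closest to the measurement, with no explicit likelihood."""
    thetas = np.array([prior_sampler() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(simulate(th) - observed) for th in thetas])
    keep = dists <= np.quantile(dists, quantile)
    return thetas[keep]

# Toy stand-in for an FE model: the "predicted measurements" are a simple
# function of one stiffness-like parameter theta (purely illustrative).
def simulate(theta, rng=np.random.default_rng(3)):
    return np.array([np.sqrt(theta), 2.0 / np.sqrt(theta)]) + rng.normal(scale=0.02, size=2)

observed = np.array([3.0, 0.67])                 # consistent with theta near 9
prior = lambda: np.random.uniform(1.0, 20.0)

posterior_draws = abc_rejection(observed, simulate, prior)
print(f"posterior mean of theta ~ {posterior_draws.mean():.2f}")
```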

  12. "Help Wanted, Inquire Within": Estimation. Activities and Thoughts That Emphasize Dealing Sensibly with Numbers through the Processes of Estimation. (Grades 1-6). Title I Elementary Mathematics Program.

    ERIC Educational Resources Information Center

    Gronert, Joie; Marshall, Sally

    Developed for elementary teachers, this activity unit is designed to teach students the importance of estimation in developing quantitative thinking. Nine ways in which estimation is useful to students are listed, and five general guidelines are offered to the teacher for planning estimation activities. Specific guidelines are provided for…

  13. Eliciting information from experts on the likelihood of rapid climate change.

    PubMed

    Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil

    2005-12-01

    The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or through the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations. PMID:16506972

  14. Comparing Participants' Rating and Compendium Coding to Estimate Physical Activity Intensities

    ERIC Educational Resources Information Center

    Masse, Louise C.; Eason, Karen E.; Tortolero, Susan R.; Kelder, Steven H.

    2005-01-01

    This study assessed agreement between participants' rating (PMET) and compendium coding (CMET) of estimating physical activity intensity in a population of older minority women. As part of the Women on the Move study, 224 women completed a 7-day activity diary and wore an accelerometer for 7 days. All activities recorded were coded using PMET and…

  15. Non-Exercise Estimation of VO[subscript 2]max Using the International Physical Activity Questionnaire

    ERIC Educational Resources Information Center

    Schembre, Susan M.; Riebe, Deborah A.

    2011-01-01

    Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO[subscript 2]max) as well as sub-maximal exercise testing. The International Physical Activity Questionnaire is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO[subscript…

  16. Modelling autoimmune rheumatic disease: a likelihood rationale.

    PubMed

    Ulvestad, E

    2003-07-01

    Immunoglobulins (Igs) and autoantibodies are commonly tested in sera from patients with suspected rheumatic disease. To evaluate the clinical utility of the tests in combination, we investigated sera from 351 patients with autoimmune rheumatic disease (ARD; rheumatoid arthritis (RA), systemic lupus erythematosus (SLE) and Sjögren's syndrome (SS)) and 96 patients with nonautoimmune rheumatic disease (NAD) (fibromyalgia, osteoarthritis, etc.). Antinuclear antibodies (ANA), rheumatoid factor (RF), antibodies against DNA and extractable nuclear antigens (anti-ENA), IgG, IgA and IgM were measured for all patients. Logistic regression analysis of test results was used to calculate each patient's probability for belonging to the ARD or NAD group as well as likelihood ratios for disease. Test accuracy was investigated using receiver-operating characteristic (ROC) plots and nonparametric ROC analysis. Neither concentrations of IgG, IgA, IgM, anti-DNA nor anti-ENA gave a significant effect on diagnostic outcome. Probabilities for disease and likelihood ratios calculated by combining RF and ANA performed significantly better at predicting ARD than utilization of the diagnostic tests in isolation (P < 0.001). At a cut-off level of P = 0.73 and likelihood ratio = 1, the logistic model gave a specificity of 93% and a sensitivity of 75% for the differentiation between ARD and NAD. When compared at the same level of specificity, ANA gave a sensitivity of 37% and RF gave a sensitivity of 56.6%. Dichotomizing ANA and RF as positive or negative did not reduce the performance characteristics of the model. Combining results obtained from serological analysis of ANA and RF according to this model will increase the diagnostic utility of the tests in rheumatological practice. PMID:12828565

  17. Intelligence's likelihood and evolutionary time frame

    NASA Astrophysics Data System (ADS)

    Bogonovich, Marc

    2011-04-01

    This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.

  18. Estimating Toxicity Pathway Activating Doses for High Throughput Chemical Risk Assessments

    EPA Science Inventory

    Estimating a Toxicity Pathway Activating Dose (TPAD) from in vitro assays as an analog to a reference dose (RfD) derived from in vivo toxicity tests would facilitate high throughput risk assessments of thousands of data-poor environmental chemicals. Estimating a TPAD requires def...

  19. Simulated likelihood methods for complex double-platform line transect surveys.

    PubMed

    Schweder, T; Skaug, H J; Langaas, M; Dimakos, X K

    1999-09-01

    The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995. PMID:11314993

  20. Augmented composite likelihood for copula modeling in family studies under biased sampling.

    PubMed

    Zhong, Yujie; Cook, Richard J

    2016-07-01

    The heritability of chronic diseases can be effectively studied by examining the nature and extent of within-family associations in disease onset times. Families are typically accrued through a biased sampling scheme in which affected individuals are identified and sampled along with their relatives who may provide right-censored or current status data on their disease onset times. We develop likelihood and composite likelihood methods for modeling the within-family association in these times through copula models in which dependencies are characterized by Kendall's τ. Auxiliary data from independent individuals are exploited by augmenting composite likelihoods to increase the precision of marginal parameter estimates and consequently increase efficiency in dependence parameter estimation. An application to a motivating family study in psoriatic arthritis illustrates the method and provides some evidence of excessive paternal transmission of risk. PMID:26819481
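
    A minimal sketch of the copula idea referenced above: converting a target Kendall's tau into the dependence parameter of an Archimedean copula and simulating paired onset times. The Clayton family is used purely for illustration (the record does not specify the copula family), and the Weibull margins and all numbers are assumptions.

      # Sketch: relate Kendall's tau to a Clayton copula parameter and simulate
      # correlated onset times for pairs of relatives. Illustrative only.
      import numpy as np
      from scipy.stats import kendalltau

      def simulate_clayton_pair(tau, n, rng):
          """Simulate n pairs with uniform margins from a Clayton copula with Kendall's tau."""
          theta = 2.0 * tau / (1.0 - tau)                        # Clayton: tau = theta / (theta + 2)
          v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)    # shared gamma frailty
          e = rng.exponential(size=(n, 2))
          return (1.0 + e / v[:, None]) ** (-1.0 / theta)

      rng = np.random.default_rng(1)
      u = simulate_clayton_pair(tau=0.3, n=5000, rng=rng)

      # Illustrative Weibull margins for the paired onset times.
      onset_times = (-np.log(1.0 - u) / 0.02) ** (1.0 / 1.5)

      # Empirical Kendall's tau of the simulated pairs should be close to the target 0.3.
      print(kendalltau(u[:, 0], u[:, 1])[0])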

  1. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    ); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions. PMID:26263302

  2. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  3. CORA: Emission Line Fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
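
    The sketch below illustrates the Poisson-likelihood fitting idea behind CORA: a Gaussian line profile plus a flat background fit to a low-count binned spectrum by minimizing the negative Poisson log-likelihood. CORA itself derives a fixed-point equation for the line flux; here a generic optimizer is used instead, and the simulated spectrum is illustrative.

      # Sketch: Poisson-likelihood fitting of an emission line in a low-count spectrum.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      wav = np.linspace(13.3, 13.7, 80)        # wavelength grid (Angstrom), illustrative

      def expected_counts(params):
          """Expected counts per bin: a Gaussian line plus a flat background."""
          line_counts, center, sigma, bkg_per_bin = params
          sigma = max(abs(sigma), 1e-4)        # keep the width positive for the optimizer
          profile = np.exp(-0.5 * ((wav - center) / sigma) ** 2)
          profile /= profile.sum()             # normalize the line profile over the bins
          return line_counts * profile + bkg_per_bin

      def neg_poisson_loglike(params):
          mu = np.clip(expected_counts(params), 1e-12, None)
          return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))

      rng = np.random.default_rng(2)
      counts = rng.poisson(expected_counts([40.0, 13.5, 0.02, 0.5]))   # simulated spectrum

      fit = minimize(neg_poisson_loglike, x0=[20.0, 13.49, 0.03, 1.0], method="Nelder-Mead")
      print(fit.x)    # estimated total line counts, line center, line width, background per bin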

  4. CORA - emission line fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.

  5. Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty

    ERIC Educational Resources Information Center

    Lagattuta, Kristin Hansen; Sayfan, Liat

    2011-01-01

    Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…

  6. Planck 2013 results. XV. CMB power spectra and likelihood

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. 
A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the
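
    At high multipoles the record above describes a correlated Gaussian likelihood approximation over binned cross-spectra. A generic sketch of that likelihood form is given below, with synthetic band powers and a toy covariance in place of the actual Planck data products.

      # Sketch of a correlated Gaussian likelihood over binned power-spectrum estimates:
      #   -2 ln L = (C_hat - C(theta))^T Sigma^{-1} (C_hat - C(theta)) + ln det Sigma + const
      # Band powers and covariance below are synthetic placeholders.
      import numpy as np

      def gaussian_loglike(c_hat, c_model, cov):
          resid = c_hat - c_model
          chol = np.linalg.cholesky(cov)
          z = np.linalg.solve(chol, resid)              # avoids explicitly inverting cov
          logdet = 2.0 * np.sum(np.log(np.diag(chol)))
          return -0.5 * (z @ z + logdet + len(resid) * np.log(2 * np.pi))

      nbins = 50
      cov = 0.1 * np.eye(nbins) + 0.02 * np.ones((nbins, nbins))   # toy correlated errors
      c_model = np.linspace(5000.0, 1000.0, nbins)                 # toy theory band powers
      rng = np.random.default_rng(3)
      c_hat = rng.multivariate_normal(c_model, cov)                # toy "measured" band powers

      print(gaussian_loglike(c_hat, c_model, cov))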

  7. The thermal tolerance of crayfish could be estimated from respiratory electron transport system activity.

    PubMed

    Simčič, Tatjana; Pajk, Franja; Jaklič, Martina; Brancelj, Anton; Vrezec, Al

    2014-04-01

    Whether electron transport system (ETS) activity could be used as an estimator of crayfish thermal tolerance has been investigated experimentally. Food consumption rate, respiration rates in air and water, the difference between energy consumption and respiration costs at a given temperature ('potential growth scope', PGS), and ETS activity of Orconectes limosus and Pacifastacus leniusculus were determined over a temperature range of 5-30°C. All of the parameters considered were found to be temperature dependent. The significant correlation between ETS activity and PGS indicates that they respond similarly to temperature change. The regression analysis of ETS activity as an estimator of thermal tolerance at the mitochondrial level and PGS as an indicator of thermal tolerance at the organismic level showed that the optimum temperature range of ETS activity is shifted upwards by 2°C in O. limosus and by 3°C in P. leniusculus. Thus, lower estimated temperature optima and temperatures of optimum ranges of PGS compared to ETS activity could indicate higher thermal sensitivity at the organismic level than at a lower level of complexity (i.e. at the mitochondrial level). The response of ETS activity to temperature change, especially at lower and higher temperatures, indicates differences in the characteristics of the ETSs in O. limosus and P. leniusculus. O. limosus is less sensitive to high temperature. The significant correlation between PGS and ETS activity supports our assumption that ETS activity could be used for the rapid estimation of thermal tolerance in crayfish species. PMID:24679968

  8. A maximum likelihood approach to the inverse problem of scatterometry.

    PubMed

    Henn, Mark-Alexander; Gross, Hermann; Scholze, Frank; Wurm, Matthias; Elster, Clemens; Bär, Markus

    2012-06-01

    Scatterometry is frequently used as a non-imaging indirect optical method to reconstruct the critical dimensions (CD) of periodic nanostructures. A particularly promising direction is EUV scatterometry with wavelengths in the range of 13-14 nm. The conventional approach to determine CDs is the minimization of a least squares function (LSQ). In this paper, we introduce an alternative method based on the maximum likelihood estimation (MLE) that determines the statistical error model parameters directly from measurement data. By using simulation data, we show that the MLE method is able to correct the systematic errors present in LSQ results and improves the accuracy of scatterometry. In a second step, the MLE approach is applied to measurement data from both extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Using MLE removes the systematic disagreement of EUV with other methods such as scanning electron microscopy and gives consistent results for DUV. PMID:22714306
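
    A toy illustration of the contrast drawn above between plain least squares and maximum likelihood with an explicit error model: the noise-model parameters are estimated jointly with the geometry parameter. The forward model below is an arbitrary smooth curve, not an EUV diffraction solver, and the assumed noise model (variance = (a·signal)² + b²) is an illustrative choice.

      # Sketch: joint MLE of a geometry parameter and noise-model parameters vs plain LSQ.
      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 1.0, 60)

      def forward(cd):
          # Toy "efficiency vs angle" curve parameterized by a critical dimension.
          return np.exp(-cd * x) * (1.0 + 0.3 * np.sin(6.0 * x))

      rng = np.random.default_rng(4)
      true_cd, a_true, b_true = 2.0, 0.05, 0.01
      y_clean = forward(true_cd)
      y = y_clean + rng.normal(0.0, np.sqrt((a_true * y_clean) ** 2 + b_true ** 2))

      def neg_loglike(params):
          cd, a, b = params
          mu = forward(cd)
          var = (a * mu) ** 2 + b ** 2 + 1e-12     # assumed heteroscedastic noise model
          return 0.5 * np.sum((y - mu) ** 2 / var + np.log(2 * np.pi * var))

      mle = minimize(neg_loglike, x0=[1.0, 0.1, 0.05], method="Nelder-Mead")
      lsq = minimize(lambda p: np.sum((y - forward(p[0])) ** 2), x0=[1.0], method="Nelder-Mead")
      print("MLE cd, a, b:", mle.x, " LSQ cd:", lsq.x)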

  9. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86μm were associated with positive LRs, i.e., LRs greater than 1; whereas RNFL thickness values higher than 86μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
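
    An equivalent way to state the construction above is that the likelihood ratio at a given RNFL thickness is the ratio of the test value's density in glaucomatous versus healthy eyes (the slope of the ROC curve at that operating point). The sketch below estimates such ratios with kernel densities on simulated thickness values and converts a pre-test probability to a post-test probability; all numbers are synthetic, not study data.

      # Sketch: likelihood ratios for a continuous test value and the resulting
      # post-test probability, on simulated RNFL thickness data.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(5)
      rnfl_glaucoma = rng.normal(70.0, 12.0, 260)   # thinner RNFL in glaucoma (synthetic)
      rnfl_control = rng.normal(97.0, 10.0, 190)

      kde_g = gaussian_kde(rnfl_glaucoma)
      kde_c = gaussian_kde(rnfl_control)

      def likelihood_ratio(thickness_um):
          return kde_g(thickness_um)[0] / kde_c(thickness_um)[0]

      def post_test_probability(pretest_prob, lr):
          pretest_odds = pretest_prob / (1.0 - pretest_prob)
          post_odds = pretest_odds * lr
          return post_odds / (1.0 + post_odds)

      lr80 = likelihood_ratio(80.0)     # a value well below ~86 um should give LR > 1 here
      print(f"LR at 80 um: {lr80:.2f}")
      print(f"post-test P(glaucoma) with 30% pre-test: {post_test_probability(0.30, lr80):.2f}")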

  10. Bayesian Estimation of the Active Concentration and Affinity Constants Using Surface Plasmon Resonance Technology.

    PubMed

    Feng, Feng; Kepler, Thomas B

    2015-01-01

    Surface plasmon resonance (SPR) has previously been employed to measure the active concentration of analyte in addition to the kinetic rate constants in molecular binding reactions. Those approaches, however, have a few restrictions. In this work, a Bayesian approach is developed to determine both active concentration and affinity constants using SPR technology. With the appropriate prior probabilities on the parameters and a derived likelihood function, a Markov Chain Monte Carlo (MCMC) algorithm is applied to compute the posterior probability densities of both the active concentration and kinetic rate constants based on the collected SPR data. Compared with previous approaches, ours exploits information from the entire duration of the process, including both association and dissociation phases, under partial mass transport conditions; it does not depend on calibration data; and multiple injections of analyte at varying flow rates are not necessary. Finally, the method is validated by analyzing both simulated and experimental datasets. A software package implementing our approach is developed with a user-friendly interface and made freely available. PMID:26098764

  11. Bayesian Estimation of the Active Concentration and Affinity Constants Using Surface Plasmon Resonance Technology

    PubMed Central

    Feng, Feng; Kepler, Thomas B.

    2015-01-01

    Surface plasmon resonance (SPR) has previously been employed to measure the active concentration of analyte in addition to the kinetic rate constants in molecular binding reactions. Those approaches, however, have a few restrictions. In this work, a Bayesian approach is developed to determine both active concentration and affinity constants using SPR technology. With the appropriate prior probabilities on the parameters and a derived likelihood function, a Markov Chain Monte Carlo (MCMC) algorithm is applied to compute the posterior probability densities of both the active concentration and kinetic rate constants based on the collected SPR data. Compared with previous approaches, ours exploits information from the entire duration of the process, including both association and dissociation phases, under partial mass transport conditions; it does not depend on calibration data; and multiple injections of analyte at varying flow rates are not necessary. Finally, the method is validated by analyzing both simulated and experimental datasets. A software package implementing our approach is developed with a user-friendly interface and made freely available. PMID:26098764
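
    The sketch below illustrates the core computational step described in these two records: a random-walk Metropolis sampler for the analyte concentration and kinetic rate constants given a noisy sensorgram. The forward model here is a plain 1:1 Langmuir association phase without mass-transport limitation (the published method models partial mass transport), and the priors, noise level, and data are illustrative.

      # Sketch: random-walk Metropolis MCMC for SPR binding parameters (illustrative).
      import numpy as np

      t = np.linspace(0.0, 120.0, 200)            # time (s), association phase only
      R_MAX, SIGMA = 100.0, 1.0                   # saturation response and noise sd, assumed known

      def response(conc, ka, kd):
          kobs = ka * conc + kd
          return R_MAX * (ka * conc / kobs) * (1.0 - np.exp(-kobs * t))

      rng = np.random.default_rng(6)
      true_theta = np.log([50e-9, 1e5, 1e-3])     # conc [M], ka [1/(M s)], kd [1/s]
      data = response(*np.exp(true_theta)) + rng.normal(0.0, SIGMA, t.size)

      prior_center = np.log([1e-7, 1e4, 1e-2])    # broad log-normal priors (illustrative)

      def log_post(theta):
          conc, ka, kd = np.exp(theta)
          resid = data - response(conc, ka, kd)
          loglike = -0.5 * np.sum(resid ** 2) / SIGMA ** 2
          logprior = -0.5 * np.sum(((theta - prior_center) / 5.0) ** 2)
          return loglike + logprior

      theta = prior_center.copy()
      lp = log_post(theta)
      samples = []
      for _ in range(20000):
          proposal = theta + rng.normal(0.0, 0.05, 3)       # random walk in log space
          lp_prop = log_post(proposal)
          if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
              theta, lp = proposal, lp_prop
          samples.append(np.exp(theta))

      print(np.median(samples, axis=0))           # posterior medians: concentration, ka, kd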

  12. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  13. Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data

    ERIC Educational Resources Information Center

    Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.

    2010-01-01

    Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…

  14. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.

    PubMed

    Wang, Hui; Rose, Sherri; van der Laan, Mark J

    2011-07-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  15. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning

    PubMed Central

    Wang, Hui; Rose, Sherri; van der Laan, Mark J.

    2010-01-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  16. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  17. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  18. THE MAXIMUM LIKELIHOOD APPROACH TO PROBABILISTIC MODELING OF AIR QUALITY DATA

    EPA Science Inventory

    Software using maximum likelihood estimation to fit six probabilistic models is discussed. The software is designed as a tool for the air pollution researcher to determine what assumptions are valid in the statistical analysis of air pollution data for the purpose of standard set...
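
    As a minimal example of what such software does, the sketch below fits one candidate model (a two-parameter lognormal) to simulated concentration data by maximum likelihood with SciPy and reports the log-likelihood that would be used to compare it against other candidate distributions; the data and model choice are illustrative.

      # Minimal example: maximum likelihood fit of one candidate distribution
      # (lognormal) to simulated air-quality concentration data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      concentrations = rng.lognormal(mean=2.0, sigma=0.6, size=365)   # e.g. daily ozone, ppb

      # scipy's fit() maximizes the likelihood; floc=0 keeps the standard 2-parameter form.
      shape, loc, scale = stats.lognorm.fit(concentrations, floc=0)
      print(f"fitted sigma={shape:.3f}, median={scale:.1f}")

      # Log-likelihood at the fitted parameters, usable for comparing candidate models.
      loglike = np.sum(stats.lognorm.logpdf(concentrations, shape, loc, scale))
      print(f"log-likelihood: {loglike:.1f}")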

  19. Groups, information theory, and Einstein's likelihood principle

    NASA Astrophysics Data System (ADS)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  20. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  1. Groups, information theory, and Einstein's likelihood principle.

    PubMed

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts. PMID:27176234

  2. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  3. Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling.

    PubMed

    Canham, Charles D; Uriarte, María

    2006-02-01

    Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate
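
    The four steps listed above can be illustrated with a toy seed-dispersal example: specify two competing dispersal kernels, estimate their parameters by maximum likelihood under a Poisson model for seed-trap counts, and compare the fitted models with AIC. The distances, counts, and kernel forms below are assumptions for illustration only.

      # Sketch: ML fitting and AIC comparison of two candidate seed-dispersal kernels.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)
      dist = rng.uniform(1.0, 50.0, 100)                     # trap distances from a parent tree (m)
      counts = rng.poisson(200.0 * np.exp(-dist / 8.0))      # simulated seed-trap counts

      def neg_loglike(params, kernel):
          q, scale = params
          mu = np.clip(q * kernel(dist, scale), 1e-12, None)  # expected seeds per trap
          # The factorial term of the Poisson likelihood is omitted; it is the same
          # for both candidate models and cancels in the comparison.
          return -np.sum(counts * np.log(mu) - mu)

      kernels = {
          "exponential": lambda d, s: np.exp(-d / s),
          "gaussian": lambda d, s: np.exp(-(d / s) ** 2),
      }

      for name, kernel in kernels.items():
          fit = minimize(neg_loglike, x0=[100.0, 10.0], args=(kernel,), method="Nelder-Mead")
          aic = 2.0 * fit.fun + 2 * 2                        # two parameters per model
          print(f"{name}: params={fit.x}, AIC={aic:.1f}")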

  4. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters has to be estimated. We developed a multi-objective, two-step calibration method based on Bayesian approach in order to estimate the unknown parameters of PaSim and Biome-BGC models. Biome-BGC and PaSim are a widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used which is referred as BBGC MuSo). Both models were calibrated regardless the simulated processes and type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model

  5. Improving consumption rate estimates by incorporating wild activity into a bioenergetics model.

    PubMed

    Brodie, Stephanie; Taylor, Matthew D; Smith, James A; Suthers, Iain M; Gray, Charles A; Payne, Nicholas L

    2016-04-01

    Consumption is the basis of metabolic and trophic ecology and is used to assess an animal's trophic impact. The contribution of activity to an animal's energy budget is an important parameter when estimating consumption, yet activity is usually measured in captive animals. Developments in telemetry have allowed the energetic costs of activity to be measured for wild animals; however, wild activity is seldom incorporated into estimates of consumption rates. We calculated the consumption rate of a free-ranging marine predator (yellowtail kingfish, Seriola lalandi) by integrating the energetic cost of free-ranging activity into a bioenergetics model. Accelerometry transmitters were used in conjunction with laboratory respirometry trials to estimate kingfish active metabolic rate in the wild. These field-derived consumption rate estimates were compared with those estimated by two traditional bioenergetics methods. The first method derived routine swimming speed from fish morphology as an index of activity (a "morphometric" method), and the second considered activity as a fixed proportion of standard metabolic rate (a "physiological" method). The mean consumption rate for free-ranging kingfish measured by accelerometry was 152 J·g(-1)·day(-1), which lay between the estimates from the morphometric method (μ = 134 J·g(-1)·day(-1)) and the physiological method (μ = 181 J·g(-1)·day(-1)). Incorporating field-derived activity values resulted in the smallest variance in log-normally distributed consumption rates (σ = 0.31), compared with the morphometric (σ = 0.57) and physiological (σ = 0.78) methods. Incorporating field-derived activity into bioenergetics models probably provided more realistic estimates of consumption rate compared with the traditional methods, which may further our understanding of trophic interactions that underpin ecosystem-based fisheries management. The general methods used to estimate active metabolic rates of free-ranging fish

  6. Modelling complex survey data with population level information: an empirical likelihood approach

    PubMed Central

    Oguz-Alper, M.; Berger, Y. G.

    2016-01-01

    Survey data are often collected with unequal probabilities from a stratified population. In many modelling situations, the parameter of interest is a subset of a set of parameters, with the others treated as nuisance parameters. We show that in this situation the empirical likelihood ratio statistic follows a chi-squared distribution asymptotically, under stratified single and multi-stage unequal probability sampling, with negligible sampling fractions. Simulation studies show that the empirical likelihood confidence interval may achieve better coverages and has more balanced tail error rates than standard approaches involving variance estimation, linearization or resampling. PMID:27279669

  7. A maximum likelihood method for determining the distribution of galaxies in clusters

    NASA Astrophysics Data System (ADS)

    Sarazin, C. L.

    1980-02-01

    A maximum likelihood method is proposed for the analysis of the projected distribution of galaxies in clusters. It has many advantages compared to the standard method; principally, it does not require binning of the galaxy positions, applies to asymmetric clusters, and can simultaneously determine all cluster parameters. A rapid method of solving the maximum likelihood equations is given which also automatically gives error estimates for the parameters. Monte Carlo tests indicate this method applies even for rather sparse clusters. The Godwin-Peach data on the Coma cluster are analyzed; the core sizes derived agree reasonably with those of Bahcall. Some slight evidence of mass segregation is found.
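
    A toy version of the unbinned fit described above is sketched below: a circularly symmetric projected profile Sigma(r) proportional to 1/(1 + (r/rc)^2) is fit to individual galaxy radii inside a circular field by maximum likelihood, without binning. The real method handles asymmetric clusters and fits all cluster parameters simultaneously; the profile, field radius, and data here are illustrative.

      # Sketch: unbinned maximum-likelihood fit of a cluster core radius.
      import numpy as np
      from scipy.optimize import minimize_scalar

      R_FIELD = 3.0                               # field radius (e.g. Mpc), illustrative

      def sample_radii(rc, n, rng):
          """Rejection-sample projected radii with surface density ~ 1/(1 + (r/rc)^2)."""
          radii = []
          while len(radii) < n:
              r = rng.uniform(0.0, R_FIELD, 4 * n)
              accept = rng.uniform(0.0, 1.0, r.size) < (r / R_FIELD) / (1.0 + (r / rc) ** 2)
              radii.extend(r[accept][: n - len(radii)])
          return np.array(radii)

      rng = np.random.default_rng(9)
      radii = sample_radii(rc=0.4, n=500, rng=rng)

      def neg_loglike(rc):
          # Closed-form normalization of Sigma over the circular field:
          # integral of Sigma dA = pi * rc^2 * ln(1 + (R/rc)^2).
          norm = np.pi * rc ** 2 * np.log(1.0 + (R_FIELD / rc) ** 2)
          return -np.sum(np.log((1.0 / (1.0 + (radii / rc) ** 2)) / norm))

      fit = minimize_scalar(neg_loglike, bounds=(0.05, 2.0), method="bounded")
      print(f"estimated core radius: {fit.x:.3f}")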

  8. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  9. Active/passive microwave sensor comparison of MIZ-ice concentration estimates. [Marginal Ice Zone (MIZ)

    NASA Technical Reports Server (NTRS)

    Burns, B. A.; Cavalieri, D. J.; Keller, M. R.

    1986-01-01

    Active and passive microwave data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait (MIZEX 84) are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) data to those obtained from passive microwave imagery at several frequencies. The comparison is carried out to evaluate SAR performance against the more established passive microwave technique, and to investigate discrepancies in terms of how ice surface conditions, imaging geometry, and choice of algorithm parameters affect each sensor. Active and passive estimates of ice concentration agree on average to within 12%. Estimates from the multichannel passive microwave data show best agreement with the SAR estimates because the multichannel algorithm effectively accounts for the range in ice floe brightness temperatures observed in the MIZ.

  10. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of parameters of the HEC-HMS model in Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, sensitivities of the parameters strongly depend on the likelihood function, and vary for different likelihood functions. Most of the parameters were better defined by formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions depict almost a similar effect on sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7
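
    For concreteness, the sketch below evaluates one informal measure (Nash-Sutcliffe efficiency, as in L1) and one formal measure (a Gaussian log-likelihood with first-order autoregressive residuals, in the spirit of L7) for a simulated pair of observed and modelled discharge series; the series and error parameters are placeholders.

      # Sketch: one informal and one formal likelihood measure for a model run.
      import numpy as np

      rng = np.random.default_rng(10)
      obs = 20.0 + 10.0 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 1.0, 200)
      sim = obs + rng.normal(0.5, 1.5, 200)          # an imperfect model run

      def nash_sutcliffe(obs, sim):
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def ar1_gaussian_loglike(obs, sim, rho, sigma):
          """Gaussian log-likelihood with AR(1) residuals e_t = rho*e_{t-1} + v_t,
          conditional on the first residual."""
          e = obs - sim
          v = e[1:] - rho * e[:-1]                   # innovations
          n = v.size
          return (-0.5 * n * np.log(2 * np.pi * sigma ** 2)
                  - 0.5 * np.sum(v ** 2) / sigma ** 2)

      print(f"NS efficiency: {nash_sutcliffe(obs, sim):.3f}")
      print(f"AR(1) log-likelihood: {ar1_gaussian_loglike(obs, sim, rho=0.5, sigma=1.5):.1f}")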

  11. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at . PMID:15980534

  12. Non-exercise estimation of VO2max using the International Physical Activity Questionnaire

    PubMed Central

    Schembre, Susan M.; Riebe, Deborah A.

    2011-01-01

    Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO2max) as well as submaximal exercise testing. The International Physical Activity Questionnaire (IPAQ) is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO2max estimation equation derived from the IPAQ-Short Form (IPAQ-S). College-aged males and females (n = 80) completed the IPAQ-S and performed a maximal exercise test. The estimation equation was created with multivariate regression in a gender-balanced subsample of participants, equally representing five levels of fitness (n = 50) and validated in the remaining participants (n = 30). The resulting equation explained 43% of the variance in measured VO2max (SEE = 5.45 ml·kg(-1)·min(-1)). Estimated VO2max for 87% of individuals fell within acceptable limits of error observed with submaximal exercise testing (20% error). The IPAQ-S can be used to successfully estimate VO2max as well as submaximal exercise tests. Development of other population-specific estimation equations is warranted. PMID:21927551

  13. Shielding and activity estimator for template-based nuclide identification methods

    DOEpatents

    Nelson, Karl Einar

    2013-04-09

    According to one embodiment, a method for estimating an activity of one or more radio-nuclides includes receiving one or more templates, the one or more templates corresponding to one or more radio-nuclides which contribute to a probable solution, receiving one or more weighting factors, each weighting factor representing a contribution of one radio-nuclide to the probable solution, computing an effective areal density for each of the one or more radio-nuclides, computing an effective atomic number (Z) for each of the one or more radio-nuclides, computing an effective metric for each of the one or more radio-nuclides, and computing an estimated activity for each of the one or more radio-nuclides. In other embodiments, computer program products, systems, and other methods are presented for estimating an activity of one or more radio-nuclides.

  14. Personalizing energy expenditure estimation using physiological signals normalization during activities of daily living.

    PubMed

    Altini, Marco; Penders, Julien; Vullers, Ruud; Amft, Oliver

    2014-09-01

    In this paper we propose a generic approach to reduce inter-individual variability of different physiological signals (HR, GSR and respiration) by automatically estimating normalization parameters (e.g. baseline and range). The proposed normalization procedure does not require a dedicated personal calibration during system setup. On the other hand, normalization parameters are estimated at system runtime from sedentary and low intensity activities of daily living (ADLs), such as lying and walking. When combined with activity-specific energy expenditure (EE) models, our normalization procedure improved EE estimation by 15 to 33% in a study group of 18 participants, compared to state of the art activity-specific EE models combining accelerometer and non-normalized physiological signals. PMID:25120177
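
    A minimal sketch of the normalization step described above: estimate a per-user baseline from a sedentary segment (lying) and a range anchor from a low-intensity segment (walking) identified at runtime, then rescale the signal before it enters an activity-specific EE model. The signal values, labels, and the median-based estimators are illustrative assumptions.

      # Sketch: runtime normalization of a physiological signal using ADL segments.
      import numpy as np

      def normalize_signal(signal, activity_labels):
          """Rescale to roughly [0, 1] using lying as baseline and walking as range anchor."""
          baseline = np.median(signal[activity_labels == "lying"])
          walking_level = np.median(signal[activity_labels == "walking"])
          span = max(walking_level - baseline, 1e-6)
          return (signal - baseline) / span

      rng = np.random.default_rng(11)
      hr = np.concatenate([rng.normal(62, 2, 100), rng.normal(95, 4, 100), rng.normal(140, 6, 50)])
      labels = np.array(["lying"] * 100 + ["walking"] * 100 + ["running"] * 50)

      hr_norm = normalize_signal(hr, labels)
      print(hr_norm[:3], hr_norm[-3:])   # lying ~0, walking ~1, running above 1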

  15. Estimated economic impact of the levonorgestrel intrauterine system on unintended pregnancy in active duty women.

    PubMed

    Heitmann, Ryan J; Mumford, Sunni L; Hill, Micah J; Armstrong, Alicia Y

    2014-10-01

    Unintended pregnancy is reportedly higher in active duty women; therefore, we sought to estimate the potential impact that the levonorgestrel-containing intrauterine system (LNG-IUS) could have on unintended pregnancy in active duty women. A decision tree model with sensitivity analysis was used to estimate the number of unintended pregnancies in active duty women that could be prevented. A secondary cost analysis was performed to analyze the direct cost savings to the U.S. Government. The total number of Armed Services members is estimated to be over 1.3 million, with an estimated 208,146 being women. Assuming an age-standardized unintended pregnancy rate of 78 per 1,000 women, 16,235 unintended pregnancies occur each year. Using a combined LNG-IUS failure and expulsion rate of 2.2%, a decrease of 794, 1588, and 3970 unintended pregnancies was estimated to occur with 5%, 10% and 25% usage, respectively. Annual cost savings from LNG-IUS use range from $3,387,107 to $47,352,295 with 5% to 25% intrauterine device usage. One-way sensitivity analysis demonstrated LNG-IUS to be cost-effective when the cost associated with pregnancy and delivery exceeded $11,000. Use of LNG-IUS could result in significant reductions in unintended pregnancy among active duty women, resulting in substantial cost savings to the government health care system. PMID:25269131
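
    The headline figures quoted above can be reproduced with simple expected-value arithmetic, as in the sketch below. The published analysis used a decision-tree model with sensitivity analysis, so this is only a back-of-envelope check of the summary numbers.

      # Back-of-envelope reproduction of the summary figures in the record above.
      women = 208_146
      unintended_rate = 78 / 1000          # unintended pregnancies per woman per year
      failure_rate = 0.022                 # combined LNG-IUS failure and expulsion rate

      baseline_pregnancies = women * unintended_rate
      print(round(baseline_pregnancies))   # ~16,235 unintended pregnancies per year

      for uptake in (0.05, 0.10, 0.25):
          averted = women * uptake * unintended_rate * (1 - failure_rate)
          print(f"{uptake:.0%} uptake: ~{averted:.0f} pregnancies averted")
      # Expected output: ~794, ~1588, ~3970, matching the figures quoted above.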

  16. Estimating repetitive spatiotemporal patterns from resting-state brain activity data.

    PubMed

    Takeda, Yusuke; Hiroe, Nobuo; Yamashita, Okito; Sato, Masa-Aki

    2016-06-01

    Repetitive spatiotemporal patterns in spontaneous brain activities have been widely examined in non-human studies. These studies have reported that such patterns reflect past experiences embedded in neural circuits. In human magnetoencephalography (MEG) and electroencephalography (EEG) studies, however, spatiotemporal patterns in resting-state brain activities have not been extensively examined. This is because estimating spatiotemporal patterns from resting-state MEG/EEG data is difficult due to their unknown onsets. Here, we propose a method to estimate repetitive spatiotemporal patterns from resting-state brain activity data, including MEG/EEG. Without the information of onsets, the proposed method can estimate several spatiotemporal patterns, even if they are overlapping. We verified the performance of the method by detailed simulation tests. Furthermore, we examined whether the proposed method could estimate the visual evoked magnetic fields (VEFs) without using stimulus onset information. The proposed method successfully detected the stimulus onsets and estimated the VEFs, implying the applicability of this method to real MEG data. The proposed method was applied to resting-state functional magnetic resonance imaging (fMRI) data and MEG data. The results revealed informative spatiotemporal patterns representing consecutive brain activities that dynamically change with time. Using this method, it is possible to reveal discrete events spontaneously occurring in our brains, such as memory retrieval. PMID:26979127

  17. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    PubMed Central

    Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840

  18. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution.

    PubMed

    Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
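
    The sketch below shows the standard SLIC ingredient that these two records build on: a combined distance between a pixel and a cluster center that mixes intensity similarity with spatial proximity through a compactness weight. The proposed algorithm replaces the plain intensity term with likelihood information under a generalized gamma model for SAR speckle, which is not reproduced here; the numbers are illustrative.

      # Sketch: the standard SLIC distance combining intensity and spatial terms.
      import numpy as np

      def slic_distance(pixel_intensity, pixel_xy, center_intensity, center_xy,
                        grid_step, compactness=10.0):
          d_intensity = abs(pixel_intensity - center_intensity)
          d_spatial = np.hypot(pixel_xy[0] - center_xy[0], pixel_xy[1] - center_xy[1])
          # Normalize the spatial term by the superpixel grid step S and weight it by m.
          return np.hypot(d_intensity, compactness * d_spatial / grid_step)

      # Assign one pixel to the nearer of two candidate cluster centers.
      centers = [(110.0, (10.0, 12.0)), (80.0, (24.0, 30.0))]
      pixel = (105.0, (14.0, 15.0))
      dists = [slic_distance(pixel[0], pixel[1], ci, cxy, grid_step=20.0) for ci, cxy in centers]
      print("assigned to center", int(np.argmin(dists)), "with distances", dists)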

  19. Estimating the magnitude of near-membrane PDE4 activity in living cells.

    PubMed

    Xin, Wenkuan; Feinstein, Wei P; Britain, Andrea L; Ochoa, Cristhiaan D; Zhu, Bing; Richter, Wito; Leavesley, Silas J; Rich, Thomas C

    2015-09-15

    Recent studies have demonstrated that functionally discrete pools of phosphodiesterase (PDE) activity regulate distinct cellular functions. While the importance of localized pools of enzyme activity has become apparent, few studies have estimated enzyme activity within discrete subcellular compartments. Here we present an approach to estimate near-membrane PDE activity. First, total PDE activity is measured using traditional PDE activity assays. Second, known cAMP concentrations are dialyzed into single cells and the spatial spread of cAMP is monitored using cyclic nucleotide-gated channels. Third, mathematical models are used to estimate the spatial distribution of PDE activity within cells. Using this three-tiered approach, we observed two pharmacologically distinct pools of PDE activity, a rolipram-sensitive pool and an 8-methoxymethyl IBMX (8MM-IBMX)-sensitive pool. We observed that the rolipram-sensitive PDE (PDE4) was primarily responsible for cAMP hydrolysis near the plasma membrane. Finally, we observed that PDE4 was capable of blunting cAMP levels near the plasma membrane even when 100 μM cAMP were introduced into the cell via a patch pipette. Two compartment models predict that PDE activity near the plasma membrane, near cyclic nucleotide-gated channels, was significantly lower than total cellular PDE activity and that a slow spatial spread of cAMP allowed PDE activity to effectively hydrolyze near-membrane cAMP. These results imply that cAMP levels near the plasma membrane are distinct from those in other subcellular compartments; PDE activity is not uniform within cells; and localized pools of AC and PDE activities are responsible for controlling cAMP levels within distinct subcellular compartments. PMID:26201952

  20. Improved activity estimation with MC-JOSEM versus TEW-JOSEM in 111In SPECT

    PubMed Central

    Ouyang, Jinsong; Fakhri, Georges El; Moore, Stephen C.

    2008-01-01

    We have previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. A phantom study was performed to compare quantitative imaging performance of MC-JOSEM with that of a triple-energy-window approach (TEW) in which estimated scatter was also included additively within JOSEM, TEW-JOSEM. We acquired high-count projections of a 5.5 cm3 sphere of 111In at different locations in the water-filled torso phantom; high-count projections were then obtained with 111In only in the liver or only in the soft-tissue background compartment, so that we could generate synthetic projections for spheres surrounded by various activity distributions. MC scatter estimates used by MC-JOSEM were computed once after five iterations of TEW-JOSEM. Images of different combinations of liver∕background and sphere∕background activity concentration ratios were reconstructed by both TEW-JOSEM and MC-JOSEM for 40 iterations. For activity estimation in the sphere, MC-JOSEM always produced better relative bias and relative standard deviation than TEW-JOSEM for each sphere location, iteration number, and activity combination. The average relative bias of activity estimates in the sphere for MC-JOSEM after 40 iterations was −6.9%, versus −15.8% for TEW-JOSEM, while the average relative standard deviation of the sphere activity estimates was 16.1% for MC-JOSEM, versus 27.4% for TEW-JOSEM. Additionally, the average relative bias of activity concentration estimates in the liver and the background for MC-JOSEM after 40 iterations was −3.9%, versus −12.2% for TEW-JOSEM, while the average relative standard deviation of these estimates was 2.5% for MC-JOSEM, versus 3.4% for TEW-JOSEM. MC-JOSEM is a promising approach for quantitative activity estimation in 111In SPECT. PMID:18561679

  1. Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Hossain, J.; Du, Y.; Links, J.; Rahmim, A.; Karakatsanis, N.; Akhbardeh, A.; Lyons, J.; Frey, E. C.

    2015-04-01

    Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of 99mTc-teboroxime imaging. Uptake of 99mTc-teboroxime was modeled based on data from the literature. Data were simulated using the anatomically-realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the TACs for the myocardial and blood-pool VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (product of defect extent and severity) was ≤5%. This indicates that the method was robust to modest model mismatch such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈5% for a defect with 25% extent and 100% severity).

  2. Dimension-independent likelihood-informed MCMC

    DOE PAGESBeta

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  3. Dimension-independent likelihood-informed MCMC

    SciTech Connect

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  4. The maximum likelihood dating of magnetostratigraphic sections

    NASA Astrophysics Data System (ADS)

    Man, Otakar

    2011-04-01

    In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.

  5. Dimension-independent likelihood-informed MCMC

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2016-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
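
    A closely related but simpler construction that is also well defined on function space is the preconditioned Crank-Nicolson (pCN) proposal. The sketch below shows that discretization-robust baseline, not the Hessian-informed DILI proposal itself, and assumes a mean-zero Gaussian prior with a user-supplied sampler and a user-supplied negative log-likelihood (misfit) function.

        import numpy as np

        def pcn_mcmc(neg_log_lik, prior_sample, x0, beta=0.2, n_steps=10_000):
            """Preconditioned Crank-Nicolson MCMC (discretization-robust baseline).

            neg_log_lik(x) : misfit Phi(x); the target is proportional to exp(-Phi(x)) times the prior
            prior_sample() : draws one sample (numpy array) from the Gaussian prior N(0, C)
            x0             : starting state, a numpy array shaped like a prior sample
            """
            x, phi = np.asarray(x0, dtype=float), neg_log_lik(x0)
            samples = []
            for _ in range(n_steps):
                prop = np.sqrt(1.0 - beta ** 2) * x + beta * prior_sample()
                phi_prop = neg_log_lik(prop)
                if np.log(np.random.rand()) < phi - phi_prop:  # prior terms cancel in the ratio
                    x, phi = prop, phi_prop
                samples.append(x.copy())
            return np.array(samples)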

  6. Disequilibrium mapping: Composite likelihood for pairwise disequilibrium

    SciTech Connect

    Devlin, B.; Roeder, K.; Risch, N.

    1996-08-15

    The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a Composite Likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction between the disease locus and an arbitrary marker locus, θ; the age of the mutation; and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.

  7. Cortical connective field estimates from resting state fMRI activity

    PubMed Central

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V.; Dumoulin, Serge O.; Renken, Remco; Ćurčić-Blake, Branislava; Cornelissen, Frans W.

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that—despite some variability in CF estimates between RS scans—neural properties such as CF maps and CF size can be derived from resting state data. PMID:25400541
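
    To make the CF idea concrete, a minimal sketch of the forward model is given below: the predicted timecourse of a target (e.g. V2) voxel is a Gaussian-weighted sum, over cortical distance, of V1 voxel timecourses. The fitting loop over candidate centers and sizes is only indicated in a comment, and all names are illustrative rather than taken from the study.

        import numpy as np

        def cf_predicted_timecourse(v1_timecourses, cortical_dist, center_idx, sigma):
            """Connective-field prediction of one target voxel's timecourse.

            v1_timecourses : (n_v1_voxels, n_timepoints) BOLD signals in the source area
            cortical_dist  : (n_v1_voxels, n_v1_voxels) geodesic distances on the surface
            center_idx     : index of the candidate CF center voxel in V1
            sigma          : CF size of the Gaussian weighting (same units as the distances)
            """
            w = np.exp(-cortical_dist[center_idx] ** 2 / (2.0 * sigma ** 2))
            return (w / w.sum()) @ v1_timecourses

        # Fitting would scan candidate centers and sigmas, keeping the pair whose
        # prediction correlates best with the measured target-voxel timecourse.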

  8. The dud-alternative effect in likelihood judgment.

    PubMed

    Windschitl, Paul D; Chambers, John R

    2004-01-01

    The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged likelihood of a focal outcome. This dud-alternative effect was detected for judgments involving uncertainty about trivia facts and stochastic events. Nonnumeric likelihood measures and betting measures reliably detected the effect, but numeric likelihood measures did not. Time pressure increased the magnitude of the effect. The results were consistent with a contrast-effect account: The inclusion of duds increases the perceived strength of the evidence for the focal outcome, thereby affecting its judged likelihood. PMID:14736307

  9. Generalized Cramér-Rao Bound for Joint Estimation of Target Position and Velocity for Active and Passive Radar Networks

    NASA Astrophysics Data System (ADS)

    He, Qian; Hu, Jianbin; Blum, Rick S.; Wu, Yonggang

    2016-04-01

    In this paper, we derive the Cramér-Rao bound (CRB) for joint target position and velocity estimation using an active or passive distributed radar network under more general, and practically occurring, conditions than assumed in previous work. In particular, the presented results allow nonorthogonal signals, spatially dependent Gaussian reflection coefficients, and spatially dependent Gaussian clutter-plus-noise. These bounds allow designers to compare the performance of their developed approaches, which are deemed to be of acceptable complexity, to the best achievable performance. If their developed approaches lead to performance close to the bounds, these developed approaches can be deemed "good enough". A recent study in which algorithms were developed for a practical radar application that must involve nonorthogonal signals, and for which the best performance is unknown, is one example. The presented results in our paper do not make any assumptions about the approximate location of the target being known from previous target detection signal processing. In addition, for situations in which we do not know some parameters accurately, we also derive the mismatched CRB. Numerical investigations of the mean squared error of the maximum likelihood estimation are employed to support the validity of the CRBs. In order to demonstrate the utility of the provided results to a topic of great current interest, the numerical results focus on a passive radar system using the Global System for Mobile Communications (GSM) cellular system.
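
    As a generic illustration of how such bounds are computed (not the specific position-velocity CRB derived in the paper), the Fisher information can be estimated from Monte Carlo samples of the score and inverted:

        import numpy as np

        def crb_from_scores(scores):
            """Cramér-Rao bound from Monte Carlo score samples.

            scores : (n_samples, n_params) gradients of the log-likelihood with
                     respect to the parameters, evaluated at the true parameter value.
            Returns the CRB matrix; its diagonal lower-bounds the variance of any
            unbiased estimator of each parameter.
            """
            fim = scores.T @ scores / scores.shape[0]   # Fisher information E[s s^T]
            return np.linalg.inv(fim)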

  10. Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference

    NASA Astrophysics Data System (ADS)

    Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro

    This paper presents a reduced-reference-based video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity-difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps of side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.

  11. A likelihood reformulation method in non-normal random effects models.

    PubMed

    Liu, Lei; Yu, Zhangsheng

    2008-07-20

    In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing by a standard normal density, we reformulate the likelihood conditional on the non-normal random effects as one conditional on normal random effects. The Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from a Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
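
    A one-dimensional sketch of the multiply-and-divide reformulation is shown below: the integral of the conditional likelihood against a non-normal random-effect density g is rewritten as an integral against the standard normal density and evaluated by Gauss-Hermite quadrature. The paper carries out the same idea in SAS Proc NLMIXED; the function names here are illustrative and the supplied functions are assumed to be vectorized.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss
        from scipy.stats import norm

        def marginal_likelihood(cond_lik, non_normal_pdf, n_nodes=30):
            """Marginal likelihood with a scalar non-normal random effect b.

            cond_lik(b)       : likelihood of the data given the random effect b
            non_normal_pdf(b) : density g(b) of the non-normal random effect
            Rewrites  ∫ cond_lik(b) g(b) db  as  ∫ [cond_lik(b) g(b) / φ(b)] φ(b) db
            and integrates against the standard normal φ by Gauss-Hermite quadrature.
            """
            nodes, weights = hermegauss(n_nodes)          # weight function exp(-x^2 / 2)
            integrand = cond_lik(nodes) * non_normal_pdf(nodes) / norm.pdf(nodes)
            return np.sum(weights * integrand) / np.sqrt(2.0 * np.pi)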

  12. Estimating Am-241 activity in the body: comparison of direct measurements and radiochemical analyses

    SciTech Connect

    Lynch, Timothy P.; Tolmachev, Sergei Y.; James, Anthony C.

    2009-06-01

    The assessment of dose and ultimately the health risk from intakes of radioactive materials begins with estimating the amount actually taken into the body. An accurate estimate provides the basis to best assess the distribution in the body, the resulting dose, and ultimately the health risk. This study continues the time-honored practice of evaluating the accuracy of results obtained using in vivo measurement methods and techniques. Results from the radiochemical analyses of the 241Am activity content of tissues and organs from four donors to the United States Transuranium and Uranium Registries were compared to the results from direct measurements of radioactive material in the body performed in vivo and post mortem. Two were whole body donations and two were partial body donations. The skeleton was the organ with the highest deposition of 241Am activity in all four cases. The activities ranged from 30 Bq to 300 Bq. The skeletal estimates obtained from measurements over the forehead were within 20% of the radiochemistry results in three cases and differed by 78% in one case. The 241Am lung activity estimates ranged from 1 Bq to 30 Bq in the four cases. The results from the direct measurements were within 40% of the radiochemistry results in 3 cases and within a factor of 3 for the other case. The direct measurement estimates of liver activity ranged from 2 Bq to 60 Bq and were generally lower than the radiochemistry results. The results from this study suggest that the measurement methods and calibration techniques used at the In Vivo Radiobioassay and Research Facility to quantify the activity in the lungs, skeleton and liver are reasonable under the most challenging conditions where there is 241Am activity in multiple organs. These methods and techniques are comparable to those used at other Department of Energy sites. This suggests that the current in vivo methods and calibration techniques provide reasonable estimates of radioactive material in the body. Not

  13. Consistent estimation of complete neuronal connectivity in large neuronal populations using sparse "shotgun" neuronal activity sampling.

    PubMed

    Mishchenko, Yuriy

    2016-10-01

    We investigate the properties of the recently proposed "shotgun" sampling approach for the common inputs problem in the functional estimation of neuronal connectivity. We study the asymptotic correctness, the speed of convergence, and the data size requirements of such an approach. We show that the shotgun approach can be expected to allow the inference of the complete connectivity matrix in large neuronal populations under some rather general conditions. However, we find that the posterior error of the shotgun connectivity estimator grows quickly with the size of unobserved neuronal populations, the square of average connectivity strength, and the square of observation sparseness. This implies that the shotgun connectivity estimation will require significantly larger amounts of neuronal activity data whenever the number of neurons in observed neuronal populations remains small. We present a numerical approach for solving the shotgun estimation problem in general settings and use it to demonstrate the shotgun connectivity inference in the examples of simulated synfire and weakly coupled cortical neuronal networks. PMID:27515518

  14. Estimating the Passage of Minutes: Deviant Oscillatory Frontal Activity in Medicated and Un-Medicated ADHD

    PubMed Central

    Wilson, Tony W.; Heinrichs-Graham, Elizabeth; White, Matthew L.; Knott, Nichole L.; Wetzel, Martin W.

    2015-01-01

    Objective Attention-deficit/hyperactivity disorder (ADHD) is a common and extensively treated psychiatric disorder in children, which often persists into adulthood. The core diagnostic symptoms include inappropriate levels of hyperactivity, impulsivity, and/or pervasive inattention. Another crucial aspect of the disorder involves aberrations in temporal perception, which have been well documented in behavioral studies and recently have been the focus of neuroimaging studies. These fMRI studies have shown reduced activation in anterior cingulate and prefrontal cortices in ADHD using a time-interval discrimination task, whereby participants distinguish intervals differing by only hundreds of milliseconds. Method We utilized magnetoencephalography (MEG) to evaluate the cortical network serving temporal perception during a continuous, long-duration (minutes) time estimation experiment. Briefly, medicated and un-medicated persons with ADHD, and a control group responded each time they estimated 60 s had elapsed for an undisclosed amount of time in two separate MEG sessions. All MEG data were transformed into regional source activity, and subjected to spectral analyses to derive amplitude estimates of gamma-band activity. Results Compared to controls, un-medicated patients were less accurate time estimators and had weaker gamma activity in the anterior cingulate, supplementary motor area, and left prefrontal cortices. Following medication, these patients exhibited small but significant increases in gamma across these same neural regions and significant improvements in time estimation accuracy, which correlated with the gamma activity increases. Conclusions We found deficient gamma activity in brain areas known to be crucial for timing functions, which may underlie the day-to-day abnormalities in time perception that are common in ADHD. PMID:24040925

  15. Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment

    PubMed Central

    Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael

    2015-01-01

    Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology incorporates marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen University students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov's equations of de Leva, with adjustment made for posture cost. GPML toolbox implementation of the Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming

  16. Estimating physical activity energy expenditure with the Kinect Sensor in an exergaming environment.

    PubMed

    Nathan, David; Huynh, Du Q; Rubenson, Jonas; Rosenberg, Michael

    2015-01-01

    Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology incorporates marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen University students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov's equations of de Leva, with adjustment made for posture cost. GPML toolbox implementation of the Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming
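
    The regression step described above can be sketched with an off-the-shelf Gaussian process regressor; the feature matrix and target below are synthetic placeholders standing in for the per-window mechanical-work features and the measured metabolic cost, and are not the study's actual data or feature set.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # X: rows of per-window features derived from motion capture (e.g. segmental
        #    mechanical work, posture cost); y: measured metabolic cost per window.
        rng = np.random.default_rng(0)
        X = rng.random((200, 4))
        y = X @ np.array([2.0, 1.0, 0.5, 0.2]) + 0.1 * rng.standard_normal(200)

        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gpr.fit(X[:150], y[:150])                       # train on the first 150 windows
        pred_mean, pred_std = gpr.predict(X[150:], return_std=True)  # predict held-out cost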

  17. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    PubMed Central

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating the functional evaluations. The MLIDDF algorithm involves the use of the iteration measurement update and the current measurement, and the iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence estimate that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF) both using a traditional iteration strategy. Simulation results demonstrate that the accumulated mean-square root error for the MLIDDF algorithm in position is reduced by 63% compared to that of UKF and DDF algorithms, and by 7% compared to that of IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate. PMID:23012525

  18. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  19. Dorsomedial prefrontal cortex activity predicts the accuracy in estimating others' preferences

    PubMed Central

    Kang, Pyungwon; Lee, Jongbin; Sul, Sunhae; Kim, Hackjin

    2013-01-01

    The ability to accurately estimate another person's preferences is crucial for a successful social life. In daily interactions, we often do this on the basis of minimal information. The aims of the present study were (a) to examine whether people can accurately judge others based only on a brief exposure to their appearances, and (b) to reveal the underlying neural mechanisms with functional magnetic resonance imaging (fMRI). Participants were asked to make guesses about unfamiliar target individuals' preferences for various items after looking at their faces for 3 s. The behavioral results showed that participants estimated others' preferences above chance level. The fMRI data revealed that higher accuracy in preference estimation was associated with greater activity in the dorsomedial prefrontal cortex (DMPFC) when participants were guessing the targets' preferences relative to thinking about their own preferences. These findings suggest that accurate estimations of others' preferences may require increased activity in the DMPFC. A functional connectivity analysis revealed that higher accuracy in preference estimation was related to increased functional connectivity between the DMPFC and the brain regions that are known to be involved in theory of mind processing, such as the temporoparietal junction (TPJ) and the posterior cingulate cortex (PCC)/precuneus, during correct vs. incorrect guessing trials. On the contrary, the tendency to refer to self-preferences when estimating others' preference was related to greater activity in the ventromedial prefrontal cortex. These findings imply that the DMPFC may be a core region in estimating the preferences of others and that higher accuracy may require stronger communication between the DMPFC and the TPJ and PCC/precuneus, part of a neural network known to be engaged in mentalizing. PMID:24324419

  20. Assimilation of active and passive microwave observations for improved estimates of soil moisture and crop growth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo...

  1. Estimates of genetic parameters among scale activity scores, growth, and fatness in pigs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic parameters for scale activity score were estimated from generations 5, 6, and 7 of a randomly selected, composite population composed of Duroc, Large White, and two sources of Landrace (n = 2,186). At approximately 156 d of age, pigs were weighed (WT) and ultrasound backfat measurements (BF1...

  2. 78 FR 46597 - Agency Information Collection Activities: Comment Request for the Production Estimate (2 Forms)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ....S. Geological Survey Agency Information Collection Activities: Comment Request for the Production...). SUPPLEMENTARY INFORMATION: I. Abstract This collection is needed to provide data on mineral production for... OMB Control Number: 1028-0065. Form Numbers: 9-4042-A and 9-4124-A. Title: Production Estimate....

  3. Estimating Activity and Sedentary Behavior From an Accelerometer on the Hip or Wrist

    PubMed Central

    Rosenberger, Mary E.; Haskell, William L.; Albinali, Fahd; Mota, Selene; Nawyn, Jason; Intille, Stephen

    2013-01-01

    Previously the National Health and Nutrition Examination Survey measured physical activity with an accelerometer worn on the hip for seven days, but recently changed the location of the monitor to the wrist. PURPOSE This study compared estimates of physical activity intensity and type with an accelerometer on the hip versus the wrist. METHODS Healthy adults (n=37) wore triaxial accelerometers (Wockets) on the hip and dominant wrist along with a portable metabolic unit to measure energy expenditure during 20 activities. Motion summary counts were created, then receiver operating characteristic (ROC) curves were used to determine sedentary and activity intensity thresholds. Ambulatory activities were separated from other activities using the coefficient of variation (CV) of the counts. Mixed model predictions were used to estimate activity intensity. RESULTS The ROC for determining sedentary behavior had greater sensitivity and specificity (71% and 96%) at the hip than the wrist (53% and 76%), as did the ROC for moderate to vigorous physical activity on the hip (70% and 83%) versus the wrist (30% and 69%). The ROC for the CV associated with ambulation had a larger AUC at the hip compared to the wrist (0.83 and 0.74). The prediction model for activity energy expenditure (AEE) resulted in an average difference of 0.55 (±0.55) METs on the hip and 0.82 (±0.93) METs on the wrist. CONCLUSIONS Methods frequently used for estimating AEE and identifying activity intensity thresholds from an accelerometer on the hip generally do better than similar data from an accelerometer on the wrist. Accurately identifying sedentary behavior from a lack of wrist motion presents significant challenges. PMID:23247702
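
    A minimal sketch of the threshold-selection step is shown below, assuming epoch-level count summaries and activity labels from the criterion metabolic measure; it uses Youden's J on a ROC curve, which is one common way to pick such cut points and is not necessarily the study's exact procedure.

        import numpy as np
        from sklearn.metrics import roc_curve

        def count_threshold(counts, is_active):
            """Pick an activity-count cut point from a ROC curve (Youden's J).

            counts    : per-epoch accelerometer count summaries
            is_active : boolean labels from the criterion measure (e.g. measured METs)
            """
            fpr, tpr, thresholds = roc_curve(is_active, counts)
            return thresholds[np.argmax(tpr - fpr)]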

  4. Improving global fire carbon emissions estimates by combining moderate resolution burned area and active fire observations

    NASA Astrophysics Data System (ADS)

    Randerson, J. T.; Chen, Y.; Giglio, L.; Rogers, B. M.; van der Werf, G.

    2011-12-01

    In several important biomes, including croplands and tropical forests, many small fires exist that have sizes that are well below the detection limit for the current generation of burned area products derived from moderate resolution spectroradiometers. These fires likely have important effects on greenhouse gas and aerosol emissions and regional air quality. Here we developed an approach for combining 1km thermal anomalies (active fires; MOD14A2) and 500m burned area observations (MCD64A1) to estimate the prevalence of these fires and their likely contribution to burned area and carbon emissions. We first estimated active fires within and outside of 500m burn scars in 0.5 degree grid cells during 2001-2010 for which MCD64A1 burned area observations were available. For these two sets of active fires we then examined mean fire radiative power (FRP) and changes in enhanced vegetation index (EVI) derived from 16-day intervals immediately before and after each active fire observation. To estimate the burned area associated with sub-500m fires, we first applied burned area to active fire ratios derived solely from within burned area perimeters to active fires outside of burn perimeters. In a second step, we further modified our sub-500m burned area estimates using EVI changes from active fires outside and within burned areas (after subtracting EVI changes derived from control regions). We found that in northern and southern Africa savanna regions and in Central and South America dry forest regions, the number of active fires outside of MCD64A1 burned areas increased considerably towards the end of the fire season. EVI changes for active fires outside of burn perimeters were, on average, considerably smaller than EVI changes associated with active fires inside burn scars, providing evidence for burn scars that were substantially smaller than the 25 ha area of a single 500m pixel. FRP estimates also were lower for active fires outside of burn perimeters. In our

  5. The Atacama Cosmology Telescope: likelihood for small-scale CMB data

    SciTech Connect

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G.E.; Halpern, M.; Battaglia, N.; Battistelli, E.S.; Bond, J.R.; Hajian, A.; Hincks, A.D.; Das, S.; Devlin, M.J.; Dünner, R.; Fowler, J.W.; Irwin, K.D.; Gralla, M.; Hasselfield, M.; Hlozek, R.; Hughes, J.P.; Kosowsky, A.; and others

    2013-07-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.

  6. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; Fowler, J. W.; Gralla, M.; Hajian, A.; Halpern, M.; Hasselfield, M.; Hincks, A. D.; Hlozek, R.; Hughes, J. P.; Irwin, K. D.; Kosowsky, A.; Louis, T.; Marriage, T. A.; Marsden, D.; Menanteau, F.; Niemack, M.

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.

  7. How number line estimation skills relate to neural activations in single digit subtraction problems.

    PubMed

    Berteletti, I; Man, G; Booth, J R

    2015-02-15

    The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing and that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-olds' behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of activation in key areas for arithmetical processing. Better NL estimators showed a larger problem size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated with differences in accuracies for small subtraction problems. This study is the first to show that performance in the NL task, i.e. estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problems in regions thought to be involved in numerical magnitude and spatial processes. PMID:25497398

  8. Lesion quantification in oncological positron emission tomography: a maximum likelihood partial volume correction strategy.

    PubMed

    De Bernardi, Elisabetta; Faggiano, Elena; Zito, Felicia; Gerundini, Paolo; Baselli, Giuseppe

    2009-07-01

    A maximum likelihood (ML) partial volume effect correction (PVEC) strategy for the quantification of uptake and volume of oncological lesions in 18F-FDG positron emission tomography is proposed. The algorithm is based on the application of ML reconstruction on volumetric regional basis functions initially defined on a smooth standard clinical image and iteratively updated in terms of their activity and volume. The volume of interest (VOI) containing a previously detected region is segmented by a k-means algorithm in three regions: A central region surrounded by a partial volume region and a spill-out region. All volume outside the VOI (background with all other structures) is handled as a unique basis function and therefore "frozen" in the reconstruction process except for a gain coefficient. The coefficients of the regional basis functions are iteratively estimated with an attenuation-weighted ordered subset expectation maximization (AWOSEM) algorithm in which a 3D, anisotropic, space variant model of point spread function (PSF) is included for resolution recovery. The reconstruction-segmentation process is iterated until convergence; at each iteration, segmentation is performed on the reconstructed image blurred by the system PSF in order to update the partial volume and spill-out regions. The developed PVEC strategy was tested on sphere phantom studies with activity contrasts of 7.5 and 4 and compared to a conventional recovery coefficient method. Improved volume and activity estimates were obtained with low computational costs, thanks to blur recovery and to a better local approximation to ML convergence. PMID:19673203

  9. Robust system state estimation for active suspension control in high-speed tilting trains

    NASA Astrophysics Data System (ADS)

    Zhou, Ronghui; Zolotas, Argyrios; Goodall, Roger

    2014-05-01

    The interaction between the railway vehicle body roll and lateral dynamics substantially influences the tilting system performance in high-speed tilting trains, which results in potentially poor ride comfort and a high risk of motion sickness. Integrating active lateral secondary suspension into the tilting control system is one of the solutions to provide a remedy to roll-lateral interaction. It improves the design trade-off for the local tilt control (based only upon local vehicle measurements) between straight track ride comfort and curving performance. Advanced system state estimation technology can be applied to further enhance the system performance, i.e. by using the estimated vehicle body lateral acceleration (relative to the track) and true cant deficiency in the configuration of the tilt and lateral active suspension controllers, thereby further attenuating the coupling of the system dynamics. Robust H∞ filtering is investigated in this paper, aiming to offer a robust estimation (i.e. estimation in the presence of uncertainty) of the required variables. In particular, it can minimise the maximum estimation error and is thus more robust to system parametric uncertainty. Simulation results illustrate the effectiveness of the proposed schemes.

  10. Active galactic nucleus black hole mass estimates in the era of time domain astronomy

    SciTech Connect

    Kelly, Brandon C.; Treu, Tommaso; Pancoast, Anna; Malkan, Matthew; Woo, Jong-Hak

    2013-12-20

    We investigate the dependence of the normalization of the high-frequency part of the X-ray and optical power spectral densities (PSDs) on black hole mass for a sample of 39 active galactic nuclei (AGNs) with black hole masses estimated from reverberation mapping or dynamical modeling. We obtained new Swift observations of PG 1426+015, which has the largest estimated black hole mass of the AGNs in our sample. We develop a novel statistical method to estimate the PSD from a light curve of photon counts with arbitrary sampling, eliminating the need to bin a light curve to achieve Gaussian statistics, and we use this technique to estimate the X-ray variability parameters for the faint AGNs in our sample. We find that the normalization of the high-frequency X-ray PSD is inversely proportional to black hole mass. We discuss how to use this scaling relationship to obtain black hole mass estimates from the short timescale X-ray variability amplitude with precision ∼0.38 dex. The amplitude of optical variability on timescales of days is also anticorrelated with black hole mass, but with larger scatter. Instead, the optical variability amplitude exhibits the strongest anticorrelation with luminosity. We conclude with a discussion of the implications of our results for estimating black hole mass from the amplitude of AGN variability.

  11. Using Photosynthetically Active Radiation (PAR) Observations to Estimate Potential Evaporation with Combination Equations

    NASA Astrophysics Data System (ADS)

    Kim, J.; Freyberg, D. L.

    2011-12-01

    Estimating potential evaporation with combination equations typically depends on observations of solar radiation. In situations where only photosynthetically active radiation (PAR) observations are available, a conversion model is required. We use coincident observations of solar radiation and PAR to build a conversion model for the Santa Cruz Mountains region of California, USA. The model takes advantage of the strong seasonality in cloud cover and albedo, using two seasonal sub-models to improve performance. We examine the uncertainty induced by model error in predictions of potential evaporation and reference crop evaporation using locally calibrated combination equations, and compare with direct observations of pan evaporation and inferred estimates of lake evaporation.
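
    A hedged sketch of the two-season conversion idea is given below, assuming coincident PAR and solar-radiation observations with month labels; the seasonal split and variable names are illustrative, not the study's calibration.

        import numpy as np

        def fit_par_to_solar(par, solar, month, summer_months=(5, 6, 7, 8, 9, 10)):
            """Fit separate linear PAR-to-solar-radiation conversions for two seasons.

            par, solar : coincident observations in consistent units
            month      : calendar month (1-12) of each observation
            Returns (slope, intercept) pairs for the 'summer' and 'winter' sub-models.
            """
            par, solar, month = map(np.asarray, (par, solar, month))
            summer = np.isin(month, summer_months)
            fits = {}
            for name, mask in (("summer", summer), ("winter", ~summer)):
                slope, intercept = np.polyfit(par[mask], solar[mask], 1)
                fits[name] = (slope, intercept)
            return fits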

  12. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method which combined network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for

  13. A Maximum Likelihood Approach to Correlational Outlier Identification.

    ERIC Educational Resources Information Center

    Bacon, Donald R.

    1995-01-01

    A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of correlational outliers and the measure used, but the maximum likelihood approach is the most robust performance…

  14. The Dud-Alternative Effect in Likelihood Judgment

    ERIC Educational Resources Information Center

    Windschitl, Paul D.; Chambers, John R.

    2004-01-01

    The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged…

  15. Effect of Volume-of-Interest Misregistration on Quantitative Planar Activity and Dose Estimation

    PubMed Central

    Song, N.; He, B.; Frey, E. C.

    2010-01-01

    In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The Quantitative Planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter, and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ volumes of interest (VOIs) used in QPlanar processing and the true organ projections in the planar images is required. The goal of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision

  16. The Use of Dynamic Stochastic Social Behavior Models to Produce Likelihood Functions for Risk Modeling of Proliferation and Terrorist Attacks

    SciTech Connect

    Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.

    2008-12-01

    The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.

  17. Crisis Checklists May Substantially Reduce the Likelihood of Critical Missed Steps in the Operating Room

    MedlinePlus


  18. Quantifying the Establishment Likelihood of Invasive Alien Species Introductions Through Ports with Application to Honeybees in Australia.

    PubMed

    Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C

    2016-05-01

    The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. PMID:26482012

  19. Estimation of muscle activity using higher-order derivatives, static optimization, and forward-inverse dynamics.

    PubMed

    Yamasaki, Taiga; Idehara, Katsutoshi; Xin, Xin

    2016-07-01

    We propose a new method to estimate muscle activity in a straightforward manner, with high accuracy and relatively small computational cost, by using as external input the joint angle and its first to fourth time derivatives. The method solves the inverse dynamics problem of the skeletal system, the forward dynamics problem of the muscular system, and the load-sharing problem of the muscles as a static optimization of neural excitation signals. The external input, including the higher-order derivatives, is required to calculate the constraints imposed on the load-sharing problem. The feasibility of the method is demonstrated by simulating a simple musculoskeletal model with a single joint. Moreover, the influences of the muscular dynamics and of the higher-order derivatives on the estimated muscle activity are demonstrated by showing the results obtained when, respectively, the time constants of the activation dynamics are made very small and the third and fourth derivatives of the external input are ignored. It is concluded that the method has the potential to improve the estimation accuracy of muscle activity in highly dynamic motions. PMID:27211782
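
    The load-sharing step can be sketched as a static optimization in which a required joint torque is distributed across redundant muscles by minimizing squared activation. The moment arms, maximum isometric forces, and torque below are illustrative assumptions, and the activation dynamics and higher-order-derivative constraints described in the paper are omitted.

        import numpy as np
        from scipy.optimize import minimize

        moment_arm = np.array([0.04, 0.03, 0.05])   # m, per muscle (assumed values)
        f_max = np.array([800.0, 600.0, 1000.0])    # N, maximum isometric forces (assumed values)
        tau_required = 30.0                         # N*m, joint torque from inverse dynamics (assumed)

        def cost(a):
            return np.sum(a ** 2)                   # common "minimum effort" criterion

        constraints = {
            "type": "eq",
            "fun": lambda a: moment_arm @ (f_max * a) - tau_required,   # torque balance
        }
        bounds = [(0.0, 1.0)] * len(f_max)          # activations lie between 0 and 1

        res = minimize(cost, x0=np.full(len(f_max), 0.2), bounds=bounds,
                       constraints=constraints, method="SLSQP")
        print("activations:", np.round(res.x, 3))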

  20. Contradicting Estimates of Location, Geometry, and Rupture History of Highly Active Faults in Central Japan

    NASA Astrophysics Data System (ADS)

    Okumura, K.

    2011-12-01

    Accurate location and geometry of seismic sources are critical for estimating strong ground motion. A complete and precise rupture history is likewise critical for estimating the probability of future events. In order to better forecast future earthquakes and reduce seismic hazards, we should weigh all options and choose the most likely parameters. Multiple options in logic trees are acceptable only after thorough examination of contradicting estimates, and should not result from easy compromise or epoché. In the process of preparing and revising the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are, respectively, ~1 cm/yr and 500 to 800 years, and the estimated maximum magnitude is 7.5 to 8.5. These faults are very hazardous because almost the entire population and industry are located above the faults, within tectonic depressions. As to fault location, most of the uncertainty arises from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to incorrect mapping. Although mapping a longer, non-existent fault may seem a safer estimate, such incorrectness harms the reliability of the forecast; it does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important for estimating the intensity distribution. For the middle portion of the ISTL, fast left-lateral strike slip of up to 1 cm/yr is evident. Recent seismicity possibly induced by the 2011 Tohoku
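
    As a back-of-the-envelope illustration of how a recurrence interval feeds a probability forecast, the sketch below uses the 500 to 800 year range quoted above with a simple time-independent (Poisson) model over a 30-year window. The window length and the memoryless assumption are illustrative simplifications, not the renewal models actually used in the national hazard maps.

        import numpy as np

        window_years = 30.0
        for mean_recurrence in (500.0, 800.0):
            # Time-independent (memoryless) probability of at least one rupture in the window.
            p_rupture = 1.0 - np.exp(-window_years / mean_recurrence)
            print(f"mean recurrence {mean_recurrence:.0f} yr -> "
                  f"P(rupture in {window_years:.0f} yr) = {p_rupture:.1%}")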