Activation Likelihood Estimation meta-analysis revisited
Eickhoff, Simon B.; Bzdok, Danilo; Laird, Angela R.; Kurth, Florian; Fox, Peter T.
2011-01-01
A widely used technique for coordinate-based meta-analysis of neuroimaging data is activation likelihood estimation (ALE), which determines the convergence of foci reported from different experiments. ALE analysis involves modelling these foci as probability distributions whose width is based on empirical estimates of the spatial uncertainty due to the between-subject and between-template variability of neuroimaging data. ALE results are assessed against a null-distribution of random spatial association between experiments, resulting in random-effects inference. In the present revision of this algorithm, we address two remaining drawbacks of the previous algorithm. First, the assessment of spatial association between experiments was based on a highly time-consuming permutation test, which nevertheless entailed the danger of underestimating the right tail of the null-distribution. In this report, we outline how this previous approach may be replaced by a faster and more precise analytical method. Second, the previously applied correction procedure, i.e. controlling the false discovery rate (FDR), is supplemented by new approaches for correcting the family-wise error rate and the cluster-level significance. The different alternatives for drawing inference on meta-analytic results are evaluated on an exemplary dataset on face perception as well as discussed with respect to their methodological limitations and advantages. In summary, we thus replaced the previous permutation algorithm with a faster and more rigorous analytical solution for the null-distribution and comprehensively address the issue of multiple-comparison corrections. The proposed revision of the ALE-algorithm should provide an improved tool for conducting coordinate-based meta-analyses on functional imaging data. PMID:21963913
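The core ALE computation described in this abstract — modelling reported foci as Gaussian probability distributions and taking their union across experiments — can be sketched as a toy example. This is an illustrative sketch on a small voxel grid, not the authors' implementation: the grid, foci, and kernel width σ are made up, whereas real ALE derives the kernel width from empirical between-subject and between-template variance.

```python
import numpy as np

def modeled_activation(shape, foci, sigma):
    """Modeled activation (MA) map for one experiment: at each voxel, the
    maximum Gaussian probability contributed by any reported focus."""
    grid = np.indices(shape)
    ma = np.zeros(shape)
    peak = (2 * np.pi * sigma**2) ** -1.5  # 3D Gaussian density at its center
    for focus in foci:
        d2 = sum((grid[i] - focus[i]) ** 2 for i in range(3))
        ma = np.maximum(ma, peak * np.exp(-d2 / (2 * sigma**2)))
    return ma

def ale_map(experiments, shape, sigma=2.0):
    """ALE value per voxel: the union of per-experiment activation
    probabilities, ALE = 1 - prod_i (1 - MA_i)."""
    prod = np.ones(shape)
    for foci in experiments:
        prod *= 1.0 - modeled_activation(shape, foci, sigma)
    return 1.0 - prod
```

Voxels where foci from several experiments converge receive a higher ALE value than voxels touched by only one experiment, which is exactly the convergence that the null distribution of random spatial association is then used to test.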
Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F
2016-09-01
Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ) in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding of missing visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors. PMID:27467283
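A genetic risk score of the kind used in this study is, in its simplest form, a weighted sum of risk-allele counts across variants. The sketch below is a generic illustration, not the study's 41-variant GRS: the counts and weights are hypothetical, and published scores often weight each variant by its log odds ratio.

```python
import numpy as np

def genetic_risk_score(allele_counts, weights=None):
    """Weighted sum of risk-allele counts (0, 1, or 2 per variant).
    With no weights this reduces to a simple allele count."""
    g = np.asarray(allele_counts, dtype=float)
    w = np.ones_like(g) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(w, g))
```

For example, an individual carrying 0, 1, and 2 risk alleles at three variants has an unweighted score of 3; with hypothetical per-variant weights the alleles contribute proportionally more or less.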
Barquero, Laura A; Davis, Nicole; Cutting, Laurie E
2014-01-01
A growing number of studies examine instructional training and brain activity. The purpose of this paper is to review the literature regarding neuroimaging of reading intervention, with a particular focus on reading difficulties (RD). To locate relevant studies, searches of peer-reviewed literature were conducted using electronic databases to search for studies from the imaging modalities of fMRI and MEG (including MSI) that explored reading intervention. Of the 96 identified studies, 22 met the inclusion criteria for descriptive analysis. A subset of these (8 fMRI experiments with post-intervention data) was subjected to activation likelihood estimate (ALE) meta-analysis to investigate differences in functional activation following reading intervention. Findings from the literature review suggest differences in functional activation of numerous brain regions associated with reading intervention, including bilateral inferior frontal, superior temporal, middle temporal, middle frontal, superior frontal, and postcentral gyri, as well as bilateral occipital cortex, inferior parietal lobules, thalami, and insulae. Findings from the meta-analysis indicate change in functional activation following reading intervention in the left thalamus, right insula/inferior frontal, left inferior frontal, right posterior cingulate, and left middle occipital gyri. Though these findings should be interpreted with caution due to the small number of studies and the disparate methodologies used, this paper is an effort to synthesize across studies and to guide future exploration of neuroimaging and reading intervention.
Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R
2016-08-15
Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606
Gray matter atrophy in narcolepsy: An activation likelihood estimation meta-analysis.
Weng, Hsu-Huei; Chen, Chih-Feng; Tsai, Yuan-Hsiung; Wu, Chih-Ying; Lee, Meng; Lin, Yu-Ching; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh
2015-12-01
We reviewed the literature on the use of voxel-based morphometry (VBM) in magnetic resonance imaging (MRI) studies of narcolepsy, using a neuroimaging meta-analysis to identify concordant and specific structural deficits in patients with narcolepsy as compared with healthy subjects. We used PubMed to retrieve articles published between January 2000 and March 2014. We included all VBM research on narcolepsy and compared the findings of the studies, using gray matter volume (GMV) or gray matter concentration (GMC) to index differences in gray matter. Stereotactic data were extracted from 8 VBM studies comprising 149 narcoleptic patients and 162 control subjects. We applied the activation likelihood estimation (ALE) technique and found significant regional gray matter reduction in the bilateral hypothalamus, thalamus, globus pallidus, extending to nucleus accumbens (NAcc) and anterior cingulate cortex (ACC), left mid orbital and rectal gyri (BAs 10 and 11), right inferior frontal gyrus (BA 47), and the right superior temporal gyrus (BA 41) in patients with narcolepsy. The significant gray matter deficits in narcoleptic patients occurred in the bilateral hypothalamus and frontotemporal regions, which may be related to the emotional processing abnormalities and orexin/hypocretin pathway dysfunction common among patients with narcolepsy.
Araujo, Helder F; Kaplan, Jonas; Damasio, Antonio
2013-09-04
The autobiographical-self refers to a mental state derived from the retrieval and assembly of memories regarding one's biography. The process of retrieval and assembly, which can focus on biographical facts or personality traits or some combination thereof, is likely to vary according to the domain chosen for an experiment. To date, the investigation of the neural basis of this process has largely focused on the domain of personality traits using paradigms that contrasted the evaluation of one's traits (self-traits) with those of another person's (other-traits). This has led to the suggestion that cortical midline structures (CMSs) are specifically related to self states. Here, with the goal of testing this suggestion, we conducted activation-likelihood estimation (ALE) meta-analyses based on data from 28 neuroimaging studies. The ALE results show that both self-traits and other-traits engage CMSs; however, the engagement of medial prefrontal cortex is greater for self-traits than for other-traits, while the posteromedial cortex is more engaged for other-traits than for self-traits. These findings suggest that the involvement of CMSs is not specific to the evaluation of one's own traits, but also occurs during the evaluation of another person's traits. PMID:24027520
Wu, Xin; Yang, Wenjing; Tong, Dandan; Sun, Jiangzhou; Chen, Qunlin; Wei, Dongtao; Zhang, Qinglin; Zhang, Meng; Qiu, Jiang
2015-07-01
In this study, an activation likelihood estimation (ALE) meta-analysis was used to conduct a quantitative investigation of neuroimaging studies on divergent thinking. Based on the ALE results, the functional magnetic resonance imaging (fMRI) studies showed that distributed brain regions were more active under divergent thinking tasks (DTTs) than under control tasks, but a large portion of the brain regions were deactivated. The ALE results indicated that the brain networks underlying creative idea generation in DTTs may be composed of the lateral prefrontal cortex, posterior parietal cortex [such as the inferior parietal lobule (BA 40) and precuneus (BA 7)], anterior cingulate cortex (ACC) (BA 32), and several regions in the temporal cortex [such as the left middle temporal gyrus (BA 39) and left fusiform gyrus (BA 37)]. The left dorsolateral prefrontal cortex (BA 46) was related to selecting loosely and remotely associated concepts and organizing them into creative ideas, whereas the ACC (BA 32) was related to observing and forming distant semantic associations in performing DTTs. The posterior parietal cortex may be involved in the retrieval and buffering of semantic information related to the formed creative ideas, and several regions in the temporal cortex may be related to stored long-term memory. In addition, the ALE results of the structural studies showed that divergent thinking was related to the dopaminergic system (e.g., left caudate and claustrum). Based on the ALE results, both fMRI and structural MRI studies could uncover the neural basis of divergent thinking from different aspects (e.g., specific cognitive processing and stable individual differences in cognitive capability).
Adank, Patti
2012-01-01
The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…
Silverman, Merav H; Jedd, Kelly; Luciana, Monica
2015-11-15
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: (1) confirm the network of brain regions involved in adolescents' reward processing, (2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and (3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.
2016-01-01
The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910
LaCroix, Arianna N; Diaz, Alvaro F; Rogalsky, Corianne
2015-01-01
The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976
Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.
2015-01-01
Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667
Likelihood estimation in image warping
Machado, Alexei M. C.; Campos, Mario F.; Gee, James C.
1999-05-01
The problem of matching two images can be posed as the search for a displacement field that assigns each point of one image to a point in the second image in such a way that a likelihood function is maximized, subject to topological constraints. Since the images may be acquired by different scanners, the relationship between their intensity levels is generally unknown. The matching problem is usually solved iteratively by optimization methods. The evaluation of each candidate solution is based on an objective function that favors smooth displacements yielding likely intensity matches. This paper is concerned with the construction of a likelihood function that is derived from the information contained in the data and is thus applicable to data acquired from an arbitrary scanner. The basic assumption of the method is that the pair of images to be matched contains roughly the same proportion of tissues, which will be reflected in their gray-level histograms. Experiments with MR images corrupted with strong non-linear intensity shading show the method's effectiveness for modeling intensity artifacts. Image matching can thus be made robust to a wide range of intensity degradations.
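One minimal way to score a candidate correspondence purely from the data's gray levels, in the spirit of the approach described above (though not the authors' actual likelihood construction), is to evaluate each matched intensity pair under the empirical joint histogram of the current correspondence:

```python
import numpy as np

def match_log_likelihood(a, b, bins=32, eps=1e-12):
    """Log-likelihood of a candidate correspondence between images a and b,
    scored under the empirical joint gray-level distribution of that
    correspondence: sum_i log p(a_i, b_i). No assumed intensity mapping."""
    a, b = np.ravel(a), np.ravel(b)
    h, a_edges, b_edges = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()  # empirical joint probability per histogram bin
    ia = np.clip(np.digitize(a, a_edges) - 1, 0, bins - 1)
    ib = np.clip(np.digitize(b, b_edges) - 1, 0, bins - 1)
    return float(np.log(p[ia, ib] + eps).sum())
```

A well-aligned pair concentrates the joint histogram into few bins and scores higher than a misaligned pair, even under an unknown (e.g. scanner-dependent, non-linear) mapping between the two intensity scales, because only the consistency of the joint distribution matters.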
Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti
2015-02-01
We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the posterior (but not anterior) temporal lobe for both semantic and syntactic processing.
Budde, Kristin S; Barron, Daniel S; Fox, Peter T
2014-12-01
Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820
Ishibashi, Ryo; Pobric, Gorana; Saito, Satoru; Lambon Ralph, Matthew A
2016-01-01
The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies. PMID:27362967
Tomasino, Barbara; Gremese, Michele
2016-01-01
We can predict how an object would look if we were to see it from different viewpoints. The brain network governing mental rotation (MR) has been studied using a variety of stimuli and task instructions. Using activation likelihood estimation (ALE) meta-analysis, we tested whether different MR networks can be modulated by the type of stimulus (body vs. non-body parts) or by the type of task instructions (motor imagery-based vs. non-motor imagery-based MR instructions). Testing the bodily vs. non-bodily stimulus axis revealed bilateral sensorimotor activation for bodily-related as compared to non-bodily-related stimuli and posterior right-lateralized activation for non-bodily-related as compared to bodily-related stimuli. A top-down modulation of the network was exerted by the MR task instructions, with a bilateral (preferentially left sensorimotor) network for motor imagery-based vs. non-motor imagery-based MR instructions, the latter activating a preferentially posterior right occipito-temporal-parietal network. The present quantitative meta-analysis summarizes and amends previous descriptions of the brain network related to MR and shows how it is modulated by top-down and bottom-up experimental factors. PMID:26779003
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).
Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
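The optimization loop described above — parameters sent out to each device, scalar likelihood values returned and aggregated, new parameters chosen until convergence — can be sketched with a toy Gaussian model. This is a hypothetical simulation of the idea, not the MIDDLE software itself; the `Device` class, the Gaussian likelihood, and the grid-search optimizer are all illustrative stand-ins.

```python
import numpy as np

class Device:
    """Holds one participant's raw data locally; only scalar likelihood
    values ever leave the device."""
    def __init__(self, data):
        self._data = np.asarray(data, dtype=float)  # never sent to the server

    def neg_log_likelihood(self, params):
        # Toy Gaussian model standing in for the study's statistical model.
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # log-parameterized so sigma > 0
        r = (self._data - mu) / sigma
        return float(0.5 * r @ r
                     + self._data.size * (log_sigma + 0.5 * np.log(2 * np.pi)))

def total_nll(params, devices):
    # Aggregation step: the central optimizer sees only these scalars.
    return sum(d.neg_log_likelihood(params) for d in devices)

def fit_middle(devices, center=(0.0, 0.0), widths=(10.0, 3.0), levels=6):
    """Toy derivative-free optimizer: coarse-to-fine grid search over
    (mu, log_sigma), refining around the best aggregated likelihood."""
    center, widths = np.asarray(center, float), np.asarray(widths, float)
    for _ in range(levels):
        mus = np.linspace(center[0] - widths[0], center[0] + widths[0], 11)
        lss = np.linspace(center[1] - widths[1], center[1] + widths[1], 11)
        _, m, s = min((total_nll((m, s), devices), m, s)
                      for m in mus for s in lss)
        center, widths = np.array([m, s]), widths / 5.0
    return center
```

Fitting five simulated devices whose data are drawn from a common normal distribution recovers the shared mean and standard deviation, even though no raw data point ever reaches the optimizer — only aggregated likelihood values do.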
MAXIMUM LIKELIHOOD ESTIMATION FOR SOCIAL NETWORK DYNAMICS
Snijders, Tom A.B.; Koskinen, Johan; Schweinberger, Michael
2014-01-01
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The model for tie changes is parametric and designed for applications to social network analysis, where the network dynamics can be interpreted as being generated by choices made by the social actors represented by the nodes of the graph. An algorithm for calculating the Maximum Likelihood estimator is presented, based on data augmentation and stochastic approximation. An application to an evolving friendship network is given and a small simulation study is presented which suggests that for small data sets the Maximum Likelihood estimator is more efficient than the earlier proposed Method of Moments estimator. PMID:25419259
Diffusion Tensor Estimation by Maximizing Rician Likelihood
Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry
2012-01-01
Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
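The Rician-likelihood idea at the heart of the abstract can be sketched independently of the full tensor model. The snippet below is an illustrative sketch, not the DTEMRL implementation: the function names, the grid-search fit, and the single-amplitude setting are our own simplifications. It evaluates the Rician negative log-likelihood of magnitude data and estimates one underlying signal amplitude by maximum likelihood, using the exponentially scaled Bessel function `i0e` for numerical safety.

```python
import numpy as np
from scipy.special import i0e

def rician_negloglik(nu, sigma, m):
    """Negative log-likelihood of magnitude data m under Rice(nu, sigma).
    log I0(z) is computed as log(i0e(z)) + z to avoid overflow at high SNR."""
    m = np.asarray(m, dtype=float)
    z = m * nu / sigma**2
    log_i0 = np.log(i0e(z)) + z
    ll = np.log(m / sigma**2) - (m**2 + nu**2) / (2 * sigma**2) + log_i0
    return -ll.sum()

def fit_nu(m, sigma, grid):
    """Grid-search ML estimate of the underlying signal amplitude nu."""
    return min(grid, key=lambda nu: rician_negloglik(nu, sigma, m))

# simulated magnitude data: modulus of a complex Gaussian centred at nu_true
rng = np.random.default_rng(0)
nu_true, sigma = 5.0, 1.0
m = np.abs(nu_true + rng.normal(0, sigma, 2000) + 1j * rng.normal(0, sigma, 2000))
nu_hat = fit_nu(m, sigma, np.linspace(0.1, 10, 500))
```

A Gaussian-noise fit would be biased upward here, since the Rician mean exceeds nu; the ML fit under the correct noise model removes that bias, which is the effect the paper exploits.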
Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.
ERIC Educational Resources Information Center
Poon, Wai-Yin; Lee, Sik-Yum
1987-01-01
Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)
The relative performance of targeted maximum likelihood estimators.
Porter, Kristin E; Gruber, Susan; van der Laan, Mark J; Sekhon, Jasjeet S
2011-01-01
There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes, are especially suitable for dealing with positivity violations because in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570
ROBUST MAXIMUM LIKELIHOOD ESTIMATION IN Q-SPACE MRI.
Landman, B A; Farrell, J A D; Smith, S A; Calabresi, P A; van Zijl, P C M; Prince, J L
2008-05-14
Q-space imaging is an emerging diffusion weighted MR imaging technique to estimate molecular diffusion probability density functions (PDFs) without the need to assume a Gaussian distribution. We present a robust M-estimator, Q-space Estimation by Maximizing Rician Likelihood (QEMRL), for diffusion PDFs based on maximum likelihood. PDFs are modeled by constrained Gaussian mixtures. In QEMRL, robust likelihood measures mitigate the impacts of imaging artifacts. In simulation and in vivo human spinal cord, the method improves reliability of estimated PDFs and increases tissue contrast. QEMRL enables more detailed exploration of the PDF properties than prior approaches and may allow acquisitions at higher spatial resolution.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, chiefly because maximum likelihood is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
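A two-component normal mixture of the kind described is usually fitted by the EM algorithm, the standard route to the maximum likelihood estimate for mixtures. The sketch below is a textbook implementation on synthetic data, not the authors' code; the initialisation of the means at the sample extremes and the made-up "two-regime" data are our own choices.

```python
import numpy as np

def em_mixture2(x, iters=200):
    """EM for a two-component univariate normal mixture (a minimal sketch)."""
    x = np.asarray(x, dtype=float)
    # initialise the component means at the sample extremes (our own choice)
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates of weights, means, variances
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

# synthetic data standing in for two latent economic regimes (made up)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 1500), rng.normal(3.0, 1.0, 500)])
w, mu, var = em_mixture2(x)
```

Each EM iteration is guaranteed not to decrease the mixture log-likelihood, which is what makes it a practical maximiser for this nonconvex problem.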
One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
Zou, Hui; Li, Runze
2008-08-01
Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging because the objective function is nondifferentiable and nonconcave. In this article we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguishing feature of the LLA algorithm is that at each LLA step, the LLA estimator can naturally adopt a sparse representation. Thus we suggest using the one-step LLA estimator from the LLA algorithm as the final estimate. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the cost of maximizing the nonconcave penalized likelihood. We conduct Monte Carlo simulations to assess the finite-sample performance of the one-step sparse estimation methods. The results are very encouraging.
Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng
2016-01-01
The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770
Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem
Chan, Kwun Chuen Gary; Qin, Jing
2016-01-01
We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
A Generalized, Likelihood-Free Method for Posterior Estimation
Turner, Brandon M.; Sederberg, Per B.
2014-01-01
Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds, and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the Linear Ballistic Accumulator model, which has a known likelihood, and the Leaky Competing Accumulator model whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics – a feat that was not previously possible. PMID:24258272
Maximum-likelihood estimation of admixture proportions from genetic data.
Wang, Jinliang
2003-01-01
For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794
A maximum likelihood approach to estimating correlation functions
Baxter, Eric Jones; Rozo, Eduardo
2013-12-10
We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
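For reference, the pair-counting observables named in the abstract combine into the standard Landy-Szalay baseline as follows. This is a sketch of the LS estimator the paper compares against, not of the proposed ML estimator, and the pair-normalisation convention shown is one common choice.

```python
def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy-Szalay correlation-function estimate from raw pair counts.

    dd, dr, rr: data-data, data-random and random-random pair counts in a
    separation bin; n_d, n_r: sizes of the data and random catalogues.
    Counts are normalised by the number of possible pairs before combining.
    """
    DD = dd / (n_d * (n_d - 1) / 2.0)
    RR = rr / (n_r * (n_r - 1) / 2.0)
    DR = dr / (n_d * n_r)
    return (DD - 2.0 * DR + RR) / RR

# illustrative pair counts in a single bin (made-up numbers)
xi = landy_szalay(dd=450, dr=4000, rr=10000, n_d=100, n_r=500)
```

The abstract's point is that an ML estimator built from the same five observables attains smaller errors than this combination at a fixed random-point density.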
Maximum likelihood estimation applied to multiepoch MEG/EEG analysis
NASA Astrophysics Data System (ADS)
Baryshnikov, Boris V.
A maximum likelihood based algorithm for reducing the effects of spatially colored noise in evoked response MEG and EEG experiments is presented. The signal of interest is modeled as the low-rank mean, while the noise is modeled as a Kronecker product of spatial and temporal covariance matrices. The temporal covariance is assumed known, while the spatial covariance is estimated as part of the algorithm. In contrast to prestimulus-based whitening followed by principal component analysis, our algorithm does not require signal-free data for noise whitening and thus is more effective with non-stationary noise and produces better quality whitening for a given data record length. The efficacy of this approach is demonstrated using simulated and real MEG data. Next, a study in which we characterize the MEG cortical response to coherent vs. incoherent motion is presented. It was found that coherent motion of the object induces not only an early sensory response around 180 ms relative to the stimulus onset but also a late field in the 250-500 ms range that has not been observed previously in similar random dot kinematogram experiments. The late field could not be resolved without signal processing using the maximum likelihood algorithm. The late activity localized to parietal areas, as would be expected. We believe that the late field corresponds to higher-order processing related to the recognition of the moving object against the background. Finally, a maximum likelihood based dipole fitting algorithm is presented. It is suitable for dipole fitting of evoked response MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, eliminating the stationarity assumption implicit in prestimulus-based whitening approaches. The preliminary results of the application of this algorithm to the simulated data show its
Quasi-likelihood estimation for relative risk regression models.
Carter, Rickey E; Lipsitz, Stuart R; Tilley, Barbara C
2005-01-01
For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184), also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713). PMID:15618526
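The Poisson working-model idea can be sketched in a few lines of iteratively reweighted least squares. This is an illustrative sketch on a made-up toy trial, not the authors' implementation; in particular, the robust sandwich standard errors that valid quasi-likelihood inference would require are omitted.

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Poisson ML with log link, fitted by IRLS. Used as a quasi-likelihood
    working model for a binary outcome, so exp(beta) are relative risks."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                           # Poisson working weights
        z = X @ beta + (y - mu) / mu     # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# toy trial (made up): 40/50 successes on treatment, 20/50 on placebo
y = np.concatenate([np.ones(40), np.zeros(10), np.ones(20), np.zeros(30)])
x = np.concatenate([np.ones(50), np.zeros(50)])
X = np.column_stack([np.ones(100), x])
beta = poisson_irls(X, y)
rr = np.exp(beta[1])   # estimated relative risk of success, treatment vs placebo
```

Because the linear predictor is unconstrained, the Newton-Raphson convergence failures of the constrained Bernoulli log-linear model described in the abstract do not arise.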
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
A maximum-likelihood estimation of pairwise relatedness for autopolyploids
Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G
2015-01-01
Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to statistical insignificance with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
2013-01-01
Background The results of multiple studies on the association between antipsychotic use and structural brain changes in schizophrenia have been assessed only in qualitative literature reviews to date. We aimed to perform a meta-analysis of voxel-based morphometry (VBM) studies on this association to quantitatively synthesize the findings of these studies. Methods A systematic computerized literature search was carried out through MEDLINE/PubMed, EMBASE, ISI Web of Science, SCOPUS and PsycINFO databases aiming to identify all VBM studies addressing this question and meeting predetermined inclusion criteria. All studies reporting coordinates representing foci of structural brain changes associated with antipsychotic use were meta-analyzed by using the activation likelihood estimation technique, currently the most sophisticated and best-validated tool for voxel-wise meta-analysis of neuroimaging studies. Results Ten studies (five cross-sectional and five longitudinal) met the inclusion criteria and comprised a total of 548 individuals (298 patients on antipsychotic drugs and 250 controls). Depending on the methodologies of the selected studies, the control groups included healthy subjects, drug-free patients, or the same patients evaluated repeatedly in longitudinal comparisons (i.e., serving as their own controls). A total of 102 foci associated with structural alterations were retrieved. The meta-analysis revealed seven clusters of areas with consistent structural brain changes in patients on antipsychotics compared to controls. The seven clusters included four areas of relative volumetric decrease in the left lateral temporal cortex [Brodmann area (BA) 20], left inferior frontal gyrus (BA 44), superior frontal gyrus extending to the left middle frontal gyrus (BA 6), and right rectal gyrus (BA 11), and three areas of relative volumetric increase in the left dorsal anterior cingulate cortex (BA 24), left ventral anterior cingulate cortex (BA 24) and right putamen
Maximum-likelihood estimation of gene location by linkage disequilibrium
Hill, W.G.; Weir, B.S.
1994-04-01
Linkage disequilibrium, D, between a polymorphic disease and mapped markers can, in principle, be used to help find the map position of the disease gene. Likelihoods are therefore derived for the value of D conditional on the observed number of haplotypes in the sample and on the population parameter Nc, where N is the effective population size and c the recombination fraction between the disease and marker loci. The likelihood is computed explicitly for the case of two loci with heterozygote superiority and, more generally, by computer simulations assuming a steady state of constant population size and selective pressures or neutrality. It is found that the likelihood is, in general, not very dependent on the degree of selection at the loci and is very flat. This suggests that precise information on map position will not be obtained from estimates of linkage disequilibrium. 15 refs., 5 figs., 21 tabs.
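The disequilibrium coefficient D itself is elementary to compute from haplotype frequencies, D = p_AB - p_A * p_B. The sketch below shows only this building block, not the paper's conditional likelihood for Nc; the dictionary layout for haplotype counts is a hypothetical convention of ours.

```python
def linkage_disequilibrium(hap_counts):
    """D = p_AB - p_A * p_B from two-locus haplotype counts.
    hap_counts: dict with keys 'AB', 'Ab', 'aB', 'ab' (a hypothetical layout)."""
    n = sum(hap_counts.values())
    p_AB = hap_counts['AB'] / n          # frequency of the A-B haplotype
    p_A = (hap_counts['AB'] + hap_counts['Ab']) / n   # allele frequency of A
    p_B = (hap_counts['AB'] + hap_counts['aB']) / n   # allele frequency of B
    return p_AB - p_A * p_B

# illustrative haplotype sample (made-up counts)
D = linkage_disequilibrium({'AB': 50, 'Ab': 10, 'aB': 10, 'ab': 30})
```

The paper's contribution is the likelihood of such a D value conditional on sample size and Nc; the flatness of that likelihood, not the computation of D, is what limits mapping precision.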
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
NASA Astrophysics Data System (ADS)
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
NASA Astrophysics Data System (ADS)
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
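For the single complex tone, the ML cost reduces to the periodogram, and the multimodality that motivates the semidefinite relaxation is easy to see in a direct grid evaluation. The sketch below is a brute-force illustration of the ML objective for that simplest case, not the proposed SDP method; the signal parameters are made up.

```python
import numpy as np

def ml_tone_frequency(x, grid):
    """ML frequency estimate for a single complex tone in white Gaussian noise:
    maximise the periodogram |sum_n x[n] e^{-j 2 pi f n}|^2 over a frequency grid."""
    n = np.arange(len(x))
    power = [abs(np.sum(x * np.exp(-2j * np.pi * f * n))) ** 2 for f in grid]
    return grid[int(np.argmax(power))]

# noisy complex tone at a made-up normalised frequency
rng = np.random.default_rng(2)
N, f_true = 128, 0.1234
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n) + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
f_hat = ml_tone_frequency(x, np.linspace(0.0, 0.5, 20001))
```

The grid must be dense because the periodogram has many local maxima (sidelobes); avoiding such exhaustive search while retaining global optimality is exactly what the convex relaxation in the paper is for.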
Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution
Bowman, Kimiko O.
2007-01-01
The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^{-k}, we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. The basic algebra is excessively complicated, so a Maple code implementation is an important part of the solution process. Low-order maximum likelihood moments are given, along with Fisher's examples relating to data on ticks on sheep. The efficiency of moment estimators is discussed, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn a personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062
Digital combining-weight estimation for broadband sources using maximum-likelihood estimates
NASA Technical Reports Server (NTRS)
Rodemich, E. R.; Vilnrotter, V. A.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood approach provides some improvement in performance at the cost of increased computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.
Approximate maximum likelihood estimation of scanning observer templates
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.
2015-03-01
In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.
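The exponential approximation described above can be sketched in one dimension: correlate a template with the image, then apply an exponential (softmax) to the scanning output to obtain a probability over maximal locations. The template shape, image size, and scale constant `a` below are illustrative assumptions, not the authors' values:

```python
import numpy as np

# 1-D sketch: probability of a maximal-response location modeled by
# exponentiating the scanning-template output. Parameters are assumed.
rng = np.random.default_rng(4)
n, true_loc = 128, 40
template = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)  # Gaussian template
image = 1.5 * np.roll(np.pad(template, (0, n - template.size)), true_loc - 8)
image += 0.2 * rng.standard_normal(n)                    # internal/image noise

# Scanning output: correlate the template with the image at every shift,
# then convert to a probability map with a softmax.
response = np.correlate(image, template, mode="same")
a = 2.0                                                  # assumed softmax scale
p = np.exp(a * (response - response.max()))
p /= p.sum()
map_loc = int(np.argmax(p))
```

The softmax map `p` plays the role of the approximate distribution over maximal locations from which the paper's approximate log-likelihood is evaluated.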
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
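For intuition, a periodic AR(1), the simplest PARMA case, can be simulated and estimated by seasonal least squares, a moment-type estimator in the spirit of the seasonal Yule-Walker comparison above. This is a hedged sketch with assumed coefficients, not the paper's likelihood-maximization algorithm:

```python
import numpy as np

# Periodic AR(1): x_t = phi_{t mod s} * x_{t-1} + e_t, one AR coefficient
# per season. Coefficients and series length are illustrative assumptions.
rng = np.random.default_rng(3)
phi = np.array([0.8, 0.3, -0.4, 0.6])    # assumed seasonal AR coefficients
n_years, s = 4000, len(phi)
x = np.zeros(n_years * s)
for t in range(1, x.size):
    x[t] = phi[t % s] * x[t - 1] + rng.standard_normal()

# Seasonal least-squares (moment-type) estimate: regress x_t on x_{t-1}
# separately within each season.
phi_hat = np.empty(s)
for m in range(s):
    idx = np.arange(m, x.size, s)[1:]    # times in season m, skipping t = 0
    phi_hat[m] = x[idx] @ x[idx - 1] / (x[idx - 1] @ x[idx - 1])
```

With a long series the seasonal regressions recover the per-season coefficients closely; the paper's likelihood approach targets exactly this model class with better small-sample efficiency.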
NASA Astrophysics Data System (ADS)
Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel
2015-02-01
This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimator does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant against separation and that the estimates always exist. The effect of high leverage points on the performance of the maximum estimated likelihood estimator is then investigated through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be treated as preliminary inputs to various other numerical techniques that refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
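The Delogne-Kåsa estimator mentioned above has a simple closed form: writing the circle as x² + y² = 2ax + 2by + c turns the fit into linear least squares, with center (a, b) and radius √(c + a² + b²). A minimal sketch with an assumed circle and noise level:

```python
import numpy as np

# Delogne-Kåsa circle fit via linear least squares. The true circle
# and noise level below are illustrative assumptions.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
cx, cy, r = 3.0, -2.0, 5.0                       # assumed true circle
x = cx + r * np.cos(theta) + 0.05 * rng.standard_normal(200)
y = cy + r * np.sin(theta) + 0.05 * rng.standard_normal(200)

# Solve x^2 + y^2 = 2ax + 2by + c in the least-squares sense.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(a_hat, b_hat, c_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
r_hat = np.sqrt(c_hat + a_hat**2 + b_hat**2)
```

For points spread around the full circle with small noise the DKE is close to the MLE; it is known to be biased for short arcs, which is one motivation for the refinements discussed in the paper.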
Effects of parameter estimation on maximum-likelihood bootstrap analysis.
Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack
2010-08-01
Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data.
Maximum Likelihood Estimation of GEVD: Applications in Bioinformatics.
Thomas, Minta; Daemen, Anneleen; De Moor, Bart
2014-01-01
We propose a method, maximum likelihood estimation of generalized eigenvalue decomposition (MLGEVD), that employs a well-known technique relying on the generalization of singular value decomposition (SVD). The main aim of the work is to show the tight equivalence between MLGEVD and generalized ridge regression. This relationship reveals an important mathematical property of GEVD in which the second argument acts as prior information in the model. We thus show that MLGEVD allows the incorporation of external knowledge about the quantities of interest into the estimation problem. We illustrate the importance of prior knowledge in clinical decision making/identifying differentially expressed genes with case studies for which microarray data sets with corresponding clinical/literature information are available. In all three case studies, MLGEVD outperformed GEVD on prediction in terms of test area under the ROC curve (test AUC). MLGEVD results in significantly improved diagnosis, prognosis and prediction of therapy response.
Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes
NASA Astrophysics Data System (ADS)
Boyer, E.; Petitdidier, M.; Larzabal, P.
2004-11-01
This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills that gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.
Maximum likelihood random galaxy catalogues and luminosity function estimation
NASA Astrophysics Data System (ADS)
Cole, Shaun
2011-09-01
We present a new algorithm to generate a random (unclustered) version of a magnitude-limited observational galaxy redshift catalogue. It takes into account both galaxy evolution and the perturbing effects of large-scale structure. The key to the algorithm is a maximum likelihood (ML) method for jointly estimating both the luminosity function (LF) and the overdensity as a function of redshift. The random catalogue algorithm then works by cloning each galaxy in the original catalogue, with the number of clones determined by the ML solution. Each of these cloned galaxies is then assigned a random redshift uniformly distributed over the accessible survey volume, taking account of the survey magnitude limit(s) and, optionally, both luminosity and number density evolution. The resulting random catalogues, which can be employed in traditional estimates of galaxy clustering, make fuller use of the information available in the original catalogue and hence are superior to simply fitting a functional form to the observed redshift distribution. They are particularly well suited to studies of the dependence of galaxy clustering on galaxy properties as each galaxy in the random catalogue has the same list of attributes as measured for the galaxies in the genuine catalogue. The derivation of the joint overdensity and LF estimator reveals the limit in which the ML estimate reduces to the standard 1/Vmax LF estimate, namely when one makes the prior assumption that there are no fluctuations in the radial overdensity. The new ML estimator can be viewed as a generalization of the 1/Vmax estimate in which Vmax is replaced by a density-corrected Vdc,max.
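The standard 1/Vmax estimate that this ML estimator generalizes can be illustrated with a toy flux-limited sample: each detected galaxy is weighted by the inverse of the volume within which it would still pass the magnitude limit. The Euclidean volumes, Gaussian luminosity function, and magnitude limit below are illustrative simplifications:

```python
import numpy as np

# Toy 1/Vmax luminosity-function estimate for a flux-limited sample.
# Survey depth, magnitude limit, and the Gaussian LF are assumptions.
rng = np.random.default_rng(5)
m_lim, d_survey = 19.0, 300.0                      # apparent-mag limit; Mpc
M = rng.normal(-20.0, 1.0, 20000)                  # absolute magnitudes
d = d_survey * rng.uniform(0, 1, M.size) ** (1 / 3)  # uniform in volume
m = M + 5 * np.log10(d * 1e5)                      # distance modulus, d in Mpc
seen = m < m_lim                                   # flux-limited selection

# Vmax: volume in which each observed galaxy would still be detected,
# capped at the survey volume.
d_max = np.minimum(10 ** ((m_lim - M[seen]) / 5) / 1e5, d_survey)
w = 1.0 / ((4 / 3) * np.pi * d_max**3)
phi, edges = np.histogram(M[seen], bins=np.arange(-23, -17, 0.5), weights=w)
```

Summing the weights recovers the total number density of the parent population, which is the consistency property the paper's density-corrected Vdc,max preserves while also absorbing radial overdensity fluctuations.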
Maximum likelihood estimation for cytogenetic dose-response curves
Frome, E.L.; DuFrain, R.J.
1986-03-01
In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
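The Poisson regression fit described above can be sketched for the linear-quadratic yield model λ(d) = αd + βd² using Fisher scoring with an identity link. The doses, cell counts, and coefficients below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Poisson ML fit of lambda(d) = alpha*d + beta*d^2 by Fisher scoring.
# Doses, cells scored, and true coefficients are assumed values.
rng = np.random.default_rng(2)
dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
cells = 1000.0 * np.ones_like(dose)            # lymphocytes scored per dose
alpha_true, beta_true = 0.02, 0.06
y = rng.poisson(cells * (alpha_true * dose + beta_true * dose**2))

# Identity link with the cell count as a multiplicative offset.
X = np.column_stack([cells * dose, cells * dose**2])
theta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares starting value
for _ in range(25):                            # Fisher scoring iterations
    lam = X @ theta                            # expected counts
    score = X.T @ (y / lam - 1.0)              # gradient of log-likelihood
    info = X.T @ (X / lam[:, None])            # Fisher information
    theta = theta + np.linalg.solve(info, score)

alpha_hat, beta_hat = theta
```

The same scoring machinery yields standard errors from the inverse Fisher information, which is the basis for the hypothesis tests and regression diagnostics the abstract mentions.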
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
NASA Technical Reports Server (NTRS)
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
Performance-based selection of likelihood models for phylogeny estimation.
Minin, Vladimir; Abdo, Zaid; Joyce, Paul; Sullivan, Jack
2003-10-01
Phylogenetic estimation has largely come to rely on explicitly model-based methods. This approach requires that a model be chosen and that that choice be justified. To date, justification has largely been accomplished through use of likelihood-ratio tests (LRTs) to assess the relative fit of a nested series of reversible models. While this approach certainly represents an important advance over arbitrary model selection, the best fit of a series of models may not always provide the most reliable phylogenetic estimates for finite real data sets, where all available models are surely incorrect. Here, we develop a novel approach to model selection, which is based on the Bayesian information criterion, but incorporates relative branch-length error as a performance measure in a decision theory (DT) framework. This DT method includes a penalty for overfitting, is applicable prior to running extensive analyses, and simultaneously compares all models being considered and thus does not rely on a series of pairwise comparisons of models to traverse model space. We evaluate this method by examining four real data sets and by using those data sets to define simulation conditions. In the real data sets, the DT method selects the same or simpler models than conventional LRTs. In order to lend generality to the simulations, codon-based models (with parameters estimated from the real data sets) were used to generate simulated data sets, which are therefore more complex than any of the models we evaluate. On average, the DT method selects models that are simpler than those chosen by conventional LRTs. Nevertheless, these simpler models provide estimates of branch lengths that are more accurate both in terms of relative error and absolute error than those derived using the more complex (yet still wrong) models chosen by conventional LRTs. This method is available in a program called DT-ModSel. PMID:14530134
Maximum likelihood estimation for cytogenetic dose-response curves
Frome, E.L; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Byram, Brett; Trahey, Gregg E.; Palmeri, Mark
2012-01-01
A hallmark of clinical ultrasound has been accurate and precise displacement estimation. Displacement estimation accuracy has largely been considered to be limited by the Cramér-Rao lower bound (CRLB). However, the CRLB only describes the minimum variance obtainable from unbiased estimators. Unbiased estimators are generally implemented using Bayes' theorem, which requires a likelihood function. The classic likelihood function for the displacement estimation problem is not discriminative and is hard to implement for clinically relevant ultrasound with diffuse scattering. Since the classic likelihood function is not effective, a perturbation is proposed. The proposed likelihood function was evaluated and compared against the classic likelihood function by converting both to posterior probability density functions (PDFs) using a non-informative prior. Example results are reported for bulk motion simulations using a 6λ tracking kernel and 30 dB SNR for 1000 data realizations. The canonical likelihood function assigned the true displacement a mean probability of only 0.070±0.020, while the new likelihood function assigned the true displacement a much higher probability of 0.22±0.16. The new likelihood function shows improvements at least for bulk motion, acoustic radiation force induced motion and compressive motion, and at least for SNRs greater than 10 dB and kernel lengths between 1.5 and 12λ. PMID:23287920
Building unbiased estimators from non-gaussian likelihoods with application to shear estimation
Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; Slosar, Anze
2015-01-15
We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong’s estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
A New Maximum Likelihood Estimator for the Population Squared Multiple Correlation.
ERIC Educational Resources Information Center
Alf, Edward F., Jr.; Graf, Richard G.
2002-01-01
Developed a new estimator for the population squared multiple correlation using maximum likelihood estimation. Data from 72 air control school graduates demonstrate that the new estimator has greater accuracy than other estimators with values that fall within the parameter space. (SLD)
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Jeon, Minjeong; Rijmen, Frank
2014-01-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of this study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability to MTMM models with categorical data is discussed. PMID:24782791
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
ERIC Educational Resources Information Center
Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon
2011-01-01
The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure
ERIC Educational Resources Information Center
Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.
2014-01-01
Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
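The finite-difference sensitivity computation that MNRES is designed to avoid can be sketched as a baseline: Gauss-Newton iterations on a hypothetical scalar output model, where Gaussian measurement noise makes the ML estimate coincide with least squares. The model, data, and starting values below are all assumptions for illustration:

```python
import numpy as np

# Hypothetical model: y = a * (1 - exp(-b * t)).
def model(theta, t):
    a, b = theta
    return a * (1.0 - np.exp(-b * t))

def sensitivities(theta, t, h=1e-6):
    """Finite-difference d(model)/d(theta), one column per parameter."""
    base = model(theta, t)
    cols = []
    for i in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[i] += h
        cols.append((model(tp, t) - base) / h)
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(7)
y = model([2.0, 1.5], t) + rng.normal(scale=0.01, size=t.size)

theta = np.array([1.0, 1.0])            # starting guess
for _ in range(20):
    r = y - model(theta, t)
    S = sensitivities(theta, t)          # MNRES would fit local surfaces instead
    step, *_ = np.linalg.lstsq(S, r, rcond=None)
    theta = theta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(theta)   # close to the true values (2.0, 1.5)
```

Each iteration re-evaluates the model once per parameter; MNRES amortizes that cost by updating slope information from fitted surfaces in parameter space.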
Revising probability estimates: Why increasing likelihood means increasing impact.
Maglio, Sam J; Polman, Evan
2016-08-01
Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision: moving upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events. (PsycINFO Database Record) PMID:27281350
ERIC Educational Resources Information Center
Andersen, Erling B.
2002-01-01
Presents a simple result concerning variances of maximum likelihood (ML) estimators. The result allows for construction of residual diagrams to evaluate whether ML estimators derived from independent samples can be assumed to be equal apart from random errors. Applies this result to the polytomous Rasch model. (SLD)
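The kind of check Andersen describes can be sketched generically: given ML estimates from two independent samples together with their asymptotic variances, a Wald-type statistic tests whether the estimates differ only by random error. A minimal sketch with hypothetical binomial counts (not the polytomous Rasch model of the article):

```python
import math

def ml_binomial(k, n):
    """ML estimate of a success probability and its asymptotic variance."""
    p = k / n
    return p, p * (1 - p) / n

# Two independent samples; are the underlying parameters equal?
p1, v1 = ml_binomial(430, 1000)
p2, v2 = ml_binomial(400, 1000)
z = (p1 - p2) / math.sqrt(v1 + v2)   # approximately N(0,1) under equality
print(round(z, 3))
```

Here |z| is well below 1.96, so the two ML estimates are compatible with a common parameter at the 5% level.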
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
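For a single normal distribution the likelihood equations solve in closed form, which gives a compact baseline before turning to Newton's method or scoring for harder cases. A minimal sketch with simulated data (not the Peters-Walker procedure itself):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# The normal-sample likelihood equations have closed-form solutions:
mu_hat = data.mean()                      # ML estimate of the mean
var_hat = ((data - mu_hat) ** 2).mean()   # ML estimate of the variance
                                          # (note the 1/n divisor, not 1/(n-1))
print(mu_hat, var_hat)
```

Iterative schemes such as scoring matter when, unlike here, the likelihood equations have no closed-form solution (e.g., for mixtures of normals).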
Park, T
1993-09-30
Liang and Zeger proposed an extension of generalized linear models to the analysis of longitudinal data. Their approach is closely related to quasi-likelihood methods and can handle both normal and non-normal outcome variables such as Poisson or binary outcomes. Their approach, however, has been applied mainly to non-normal outcome variables. This is probably due to the fact that there is a large class of multivariate linear models available for normal outcomes such as growth models and random-effects models. Furthermore, there are many iterative algorithms that yield maximum likelihood estimators (MLEs) of the model parameters. The multivariate linear model approach, based on maximum likelihood (ML) estimation, specifies the joint multivariate normal distribution of outcome variables while the approach of Liang and Zeger, based on the quasi-likelihood, specifies only the marginal distributions. In this paper, I compare the approach of Liang and Zeger and the ML approach for the multivariate normal outcomes. I show that the generalized estimating equation (GEE) reduces to the score equation only when the data do not have missing observations and the correlation is unstructured. In more general cases, however, the GEE estimation yields consistent estimators that may differ from the MLEs. That is, the GEE does not always reduce to the score equation even when the outcome variables are multivariate normal. I compare the small sample properties of the GEE estimators and the MLEs by means of a Monte Carlo simulation study. PMID:8248664
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Mehra, R. K.
1974-01-01
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
Out-of-atlas likelihood estimation using multi-atlas segmentation
Asman, Andrew J.; Chambless, Lola B.; Thompson, Reid C.; Landman, Bennett A.
2013-01-01
Purpose: Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to “in-atlas” applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases. Methods: The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image. Results: Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain—an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading). Conclusions: The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on
A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations
Anderson, Amy D.; Weir, Bruce S.
2007-01-01
A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-01-01
Time-series of allele frequencies are a useful and unique set of data for determining the strength of natural selection against the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct. PMID:27527811
Maximum Likelihood Estimation of Factor Structures of Anxiety Measures: A Multiple Group Comparison.
ERIC Educational Resources Information Center
Kameoka, Velma A.; And Others
Confirmatory maximum likelihood estimation of measurement models was used to evaluate the construct generality of self-report measures of anxiety across male and female samples. These measures included Spielberger's State-Trait Anxiety Inventory, Taylor's Manifest Anxiety Scale, and two forms of Endler, Hunt and Rosenstein's S-R Inventory of…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
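The Newton-Raphson iteration for likelihood equations can be illustrated on a deliberately tiny one-parameter problem; the NDMMF equations themselves are not reproduced, and the 0/1 data below are hypothetical:

```python
import math

# Newton-Raphson for the ML estimate of a Bernoulli probability on the
# logit scale (a one-parameter stand-in for a system of likelihood equations).
y = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # hypothetical 0/1 observations
n, s = len(y), sum(y)

theta = 0.0                           # logit(p), start at p = 0.5
for _ in range(25):
    p = 1.0 / (1.0 + math.exp(-theta))
    score = s - n * p                 # d log-likelihood / d theta
    hess = -n * p * (1.0 - p)         # second derivative
    step = score / hess
    theta -= step
    if abs(step) < 1e-12:
        break

p_hat = 1.0 / (1.0 + math.exp(-theta))
print(p_hat)   # converges to the sample proportion s/n
```

A multi-parameter model such as the NDMMF replaces the scalar score and second derivative with a gradient vector and Hessian matrix, but the repeated-iteration structure is the same.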
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks.
Lin, Lin; Yang, Chengfeng; Ma, Maode
2015-12-07
Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173
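The inverse Gaussian delay model at the core of this estimator has closed-form ML estimates, which can be sketched on simulated delays with assumed parameters (the paper's two-way message-exchange offset estimator is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, lam = 3.0, 12.0                     # hypothetical delay parameters
delays = rng.wald(mu, lam, size=5000)   # inverse Gaussian propagation delays

# Closed-form ML estimates for the inverse Gaussian distribution:
mu_hat = delays.mean()
lam_hat = len(delays) / np.sum(1.0 / delays - 1.0 / mu_hat)

print(mu_hat, lam_hat)
```

These per-delay parameter estimates are the building block; the clock-offset estimator then exploits the asymmetry of the two-way exchange under this delay distribution.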
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the wearer's motion, so the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute-value operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval between R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm based on the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) to realize the correlation technique, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system. PMID:19162641
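The R-R interval estimation step can be approximated with a toy periodicity detector on a synthetic noisy pulse train; this is a plain autocorrelation sketch with made-up signal parameters, not the paper's subspace-plus-ML pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250                                 # Hz, assumed sampling rate
bpm_true = 72.0
rr = int(round(fs * 60.0 / bpm_true))    # samples per beat

# Synthetic "ECG": a spike train at the R-R period plus noise
sig = np.zeros(fs * 10)
sig[::rr] = 1.0
sig += rng.normal(scale=0.2, size=sig.size)

# Periodicity via autocorrelation: strongest peak in a plausible lag range
ac = np.correlate(sig, sig, mode="full")[sig.size - 1:]
lo, hi = fs // 4, int(1.25 * fs)         # 0.25-1.25 s, i.e. 48-240 bpm
lag = int(np.argmax(ac[lo:hi])) + lo
bpm_est = 60.0 * fs / lag
print(bpm_est)
```

The FFT-based correlation in the paper computes essentially this quantity cheaply enough for FPGA real-time use; the ML step then refines the interval estimate.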
NASA Astrophysics Data System (ADS)
Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei
In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
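The log-determinant trick mentioned here is easy to demonstrate: factorize the positive-definite matrix and sum the logs of the triangular factor's diagonal, rather than calling a determinant routine whose result overflows. A sketch using a Cholesky factorization on a synthetic SPD matrix (an LU factorization works the same way for this case):

```python
import numpy as np

rng = np.random.default_rng(6)

# A large positive-definite matrix whose determinant overflows float64.
n = 400
A = rng.normal(size=(n, n))
S = A @ A.T / n + n * np.eye(n)      # well-conditioned SPD matrix

# Stable log-determinant via a triangular factorization:
L = np.linalg.cholesky(S)
logdet = 2.0 * np.sum(np.log(np.diag(L)))

sign, logdet_ref = np.linalg.slogdet(S)
print(logdet, logdet_ref)            # agree; np.linalg.det(S) returns inf
```

Inside a likelihood function this matters because the log-determinant term must be evaluated at every candidate parameter value, and a naive determinant call fails exactly when the covariance matrix is large.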
Yang, Shuying; De Angelis, Daniela
2013-01-01
The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain the estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definition of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence interval and likelihood ratio test are also introduced. Finally, a few examples of applications are given to illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software for a further understanding of the method and its implementation is provided.
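The key components the chapter lists (the ML estimate, a confidence interval, and a likelihood ratio test) fit in a few lines for an exponential sample; the data and the null hypothesis below are illustrative, not from the chapter:

```python
import math

# ML for an exponential rate, with a Wald interval and a likelihood-ratio
# test of H0: rate = 1.0.
data = [0.3, 1.2, 0.8, 2.5, 0.1, 0.9, 1.7, 0.4, 1.1, 0.6]
n = len(data)
rate_hat = n / sum(data)                 # ML estimate: 1 / sample mean

# Wald 95% CI: rate_hat has asymptotic SE rate_hat / sqrt(n)
se = rate_hat / math.sqrt(n)
ci = (rate_hat - 1.96 * se, rate_hat + 1.96 * se)

def loglik(rate):
    return n * math.log(rate) - rate * sum(data)

# Likelihood ratio statistic; compare to chi-square with 1 df (3.84 at 5%)
lr = 2.0 * (loglik(rate_hat) - loglik(1.0))
print(rate_hat, ci, lr)
```

Here the interval covers 1.0 and the LR statistic is far below 3.84, so the illustrative null hypothesis would not be rejected.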
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes that optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
NASA Astrophysics Data System (ADS)
Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris
2016-02-01
This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
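The REML-MVN idea (draw parameter vectors from a normal distribution centered at the estimates, with covariance from the inverse information matrix, then propagate the draws to derived quantities) can be sketched with hypothetical numbers; in practice WOMBAT would supply the real estimates and information matrix:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ML estimates of two variance components and their
# information matrix (stand-ins for real REML output).
theta_hat = np.array([2.0, 1.0])
info = np.array([[50.0, 10.0],
                 [10.0, 40.0]])
cov = np.linalg.inv(info)          # asymptotic covariance of the estimates

# REML-MVN-style resampling: draw estimates, propagate to a derived
# quantity (here a heritability-like ratio theta1 / (theta1 + theta2)).
draws = rng.multivariate_normal(theta_hat, cov, size=20_000)
ratio = draws[:, 0] / (draws[:, 0] + draws[:, 1])
print(ratio.mean(), ratio.std())
```

The standard deviation of the propagated draws approximates the sampling error of the derived quantity without any data resampling or MCMC, which is what makes the approach so cheap.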
Aydin, Zeynep; Marcussen, Thomas; Ertekin, Alaattin Selcuk; Oxelman, Bengt
2014-01-01
Coalescent-based inference of phylogenetic relationships among species takes into account gene tree incongruence due to incomplete lineage sorting, but for such methods to make sense, species have to be correctly delimited. Because alternative assignments of individuals to species result in different parametric models, model selection methods can be applied to optimise the model of species classification. In a Bayesian framework, Bayes factors (BF), based on marginal likelihood estimates, can be used to test a range of possible classifications for the group under study. Here, we explore BF and the Akaike Information Criterion (AIC) to discriminate between different species classifications in the flowering plant lineage Silene sect. Cryptoneurae (Caryophyllaceae). We estimated marginal likelihoods for different species classification models via the Path Sampling (PS), Stepping Stone sampling (SS), and Harmonic Mean Estimator (HME) methods implemented in BEAST. To select among alternative species classification models, a posterior simulation-based analog of the AIC through Markov chain Monte Carlo analysis (AICM) was also performed. The results are compared to outcomes from the software BP&P. Our results agree with another recent study that marginal likelihood estimates from the PS and SS methods are useful for comparing different species classifications, and strongly support the recognition of the newly described species S. ertekinii. PMID:25216034
Robust maximum likelihood estimation for stochastic state space model with observation outliers
NASA Astrophysics Data System (ADS)
AlMutawa, J.
2016-08-01
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem that is hard to implement, but it is robust to both types of outliers considered here. To overcome this difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
F-8C adaptive flight control extensions. [for maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Arvind; Dorai, Kavita
2016-09-01
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
NASA Astrophysics Data System (ADS)
Rizzo, R. E.; Healy, D.; De Siena, L.
2015-12-01
The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulty of accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represent a fundamental step that can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a statistical methodology for fitting probability distributions of fracture attributes more accurately, using maximum likelihood estimators. These procedures aim to understand whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Cohn, T.A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
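The abstract contrasts the AMLE with the censored-data MLE (Tobit). A minimal sketch of that baseline likelihood — density terms for detected values, a normal-CDF term for each value below the detection limit — is shown below, with a known σ and a crude grid search; the AMLE's bias adjustment is not reproduced here, and all numbers are invented for illustration.

```python
import math
import random

random.seed(7)
MU_TRUE, SIGMA, LIMIT = 0.0, 1.0, -0.5

# Simulate observations subject to a detection limit: values below
# LIMIT are censored, so only their count is known.
raw = [random.gauss(MU_TRUE, SIGMA) for _ in range(2000)]
detects = [x for x in raw if x >= LIMIT]
n_censored = len(raw) - len(detects)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik(mu):
    """Censored-normal (Tobit-type) log-likelihood with known sigma:
    one density term per detect, one Phi term per non-detect
    (mu-independent constants dropped)."""
    ll = sum(-0.5 * ((x - mu) / SIGMA) ** 2 for x in detects)
    ll += n_censored * math.log(norm_cdf((LIMIT - mu) / SIGMA))
    return ll

# Crude grid search for the ML estimate of mu.
grid = [i / 200.0 for i in range(-100, 101)]
mu_hat = max(grid, key=loglik)
print(round(mu_hat, 2))
```

A naive mean of the detects alone would be biased upward here; the Phi term is what lets the likelihood account for the censored mass below the limit.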
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
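The iterative procedure analysed in this abstract is what is now known as the EM algorithm for normal mixtures. A minimal sketch for two components with unit variances follows; the true means, mixing weight, starting values, and sample size are invented for illustration.

```python
import math
import random

random.seed(3)

# Simulated two-component normal mixture: means -2 and 2, unit
# variances, equal weights (all invented for this sketch).
data = ([random.gauss(-2.0, 1.0) for _ in range(500)]
        + [random.gauss(2.0, 1.0) for _ in range(500)])

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# EM iteration estimating the two means and the mixing proportion.
pi1, mu1, mu2 = 0.5, -1.0, 1.0
for _ in range(50):
    # E-step: responsibility of component 1 for each point.
    resp = []
    for x in data:
        w1 = pi1 * norm_pdf(x, mu1)
        w2 = (1.0 - pi1) * norm_pdf(x, mu2)
        resp.append(w1 / (w1 + w2))
    # M-step: responsibility-weighted means and updated proportion.
    s = sum(resp)
    mu1 = sum(r * x for r, x in zip(resp, data)) / s
    mu2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / (len(data) - s)
    pi1 = s / len(data)

print(round(mu1, 1), round(mu2, 1), round(pi1, 2))
```

The local-convergence result described in the abstract matters in practice: with poorly separated components or bad starting values, this same iteration can stall at an inferior stationary point.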
Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function
NASA Astrophysics Data System (ADS)
LIU, Liang; WEI, Ping; LIAO, Hong Shu
Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs do not lie exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, for large arrays, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays, and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of the mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.
2014-09-01
In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.
User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1980-01-01
A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.
Maximum-likelihood Estimation of Planetary Lithospheric Rigidity from Gravity and Topography
NASA Astrophysics Data System (ADS)
Lewis, K. W.; Eggers, G. L.; Simons, F. J.; Olhede, S. C.
2014-12-01
Gravity and surface topography remain among the best available tools with which to study the lithospheric structure of planetary bodies. Numerous techniques have been developed to quantify the relationship between these fields in both the spatial and spectral domains, to constrain geophysical parameters of interest. Simons and Olhede (2013) describe a new technique based on maximum-likelihood estimation of lithospheric parameters including flexural rigidity, subsurface-surface loading ratio, and the correlation of these loads. We report on the first applications of this technique to planetary bodies including Venus, Mars, and the Earth. We compare results using the maximum-likelihood technique to previous studies using admittance and coherence-based techniques. While various methods of evaluating the relationship of gravity and topography fields have distinct advantages, we demonstrate the specific benefits of the Simons and Olhede technique, which yields unbiased, minimum variance estimates of parameters, together with their covariance. Given the unavoidable problems of incompletely sensed gravity fields, spectral artifacts of data interpolation, downward continuation, and spatial localization, we prescribe a recipe for application of this method to real-world data sets. In the specific case of Venus, we discuss the results of global mapped inversion of an isotropic Matérn covariance model of its topography. We interpret and identify, via statistical testing, regions that require abandoning the null-hypothesis of isotropic Gaussianity, an assumption of the maximum-likelihood technique.
Santos, James D; Dorgam, Diana
2016-09-01
There are several arthropods that can transmit disease to humans. To make inferences about the rate of infection of these arthropods, it is common to collect a large sample of vectors, divide them into groups (called pools), and apply a test to detect infection. This paper presents an approximate likelihood point estimator for the rate of infection for pools of different sizes, when the variability of these sizes is small and the infection rate is low. The performance of this estimator was evaluated in four simulated scenarios, created from real experiments selected from the literature. The new estimator performed well in three of these scenarios. As expected, the new estimator performed poorly in the scenario with great variability in the size of the pools, for some values of the parameter space. PMID:27159117
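For the special case of a common pool size, the ML estimator of the infection rate has a closed form, obtained by inverting the probability that a pool tests positive. The paper's approximate estimator targets unequal pool sizes; this sketch shows only the equal-size baseline, and the survey numbers are invented.

```python
def pool_mle(n_pools, n_positive, pool_size):
    """ML estimate of the per-individual infection rate from pooled
    tests with a common pool size k: a pool is positive with
    probability 1 - (1 - p)**k, so the MLE inverts the observed
    positive fraction."""
    frac_positive = n_positive / n_pools
    return 1.0 - (1.0 - frac_positive) ** (1.0 / pool_size)

# Hypothetical survey: 200 pools of 25 vectors each, 30 positive pools.
p_hat = pool_mle(200, 30, 25)
print(round(p_hat, 4))  # prints 0.0065
```

Note that if every pool tests positive the estimate degenerates to 1, one reason pooled designs are intended for the low-infection-rate regime the abstract describes.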
On the use of maximum likelihood estimation for the assembly of Space Station Freedom
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant
1991-01-01
Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be remodeled.
Pascazio, Vito; Schirinzi, Gilda
2002-01-01
In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities. PMID:18249716
An inconsistency in the standard maximum likelihood estimation of bulk flows
Nusser, Adi
2014-11-01
Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.
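Under the constant-velocity assumption criticised here, with Gaussian radial-velocity errors, the ML bulk flow reduces to solving the 3x3 normal equations built from the lines of sight. A sketch on synthetic radial peculiar velocities (the flow vector, error level, and sample size are invented for illustration):

```python
import math
import random

random.seed(5)
B_TRUE = [300.0, -100.0, 50.0]  # bulk flow to recover, km/s (invented)
SIGMA = 250.0                   # per-galaxy radial-velocity error, km/s

# Random lines of sight and noisy radial velocities u = B . rhat + noise.
gals = []
for _ in range(2000):
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    rhat = (x / r, y / r, z / r)
    u = sum(b * n for b, n in zip(B_TRUE, rhat)) + random.gauss(0.0, SIGMA)
    gals.append((rhat, u))

# ML normal equations under the constant-velocity model:
# (sum rhat rhat^T) B = sum u rhat.
A = [[sum(rh[i] * rh[j] for rh, _ in gals) for j in range(3)] for i in range(3)]
c = [sum(u * rh[i] for rh, u in gals) for i in range(3)]

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

B_hat = solve3(A, c)
print([round(b) for b in B_hat])
```

This recovers B_TRUE well precisely because the mock velocity field really is constant; the abstract's point is that for a genuine, spatially varying peculiar velocity field this estimator is not consistent with the volume-average definition of the bulk flow.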
Adaptive Partial Response Maximum Likelihood Detection with Tilt Estimation Using Sync Pattern
NASA Astrophysics Data System (ADS)
Lee, Kyusuk; Lee, Joohyun; Lee, Jaejin
2006-02-01
We propose an improved detection method that concurrently adjusts the equalizer coefficients and the reference branch values in the Viterbi detector. To estimate asymmetric channel characteristics, we exploit the sync patterns in each data frame. Because a read-only memory (ROM) table is used to update the equalizer coefficients and branch reference values, hardware complexity is reduced. The proposed partial response maximum likelihood (PRML) detector has been designed and verified in Verilog HDL and synthesized with Synopsys Design Compiler using a Hynix 0.35 μm standard cell library.
NASA Technical Reports Server (NTRS)
Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power-law energy spectrum from simulated detector responses, and the statistical properties of the resulting estimates are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Zeng, Donglin; Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
2-D impulse noise suppression by recursive gaussian maximum likelihood estimation.
Chen, Yang; Yang, Jian; Shu, Huazhong; Shi, Luyao; Wu, Jiasong; Luo, Limin; Coatrieux, Jean-Louis; Toumoulin, Christine
2014-01-01
An effective approach termed Recursive Gaussian Maximum Likelihood Estimation (RGMLE) is developed in this paper to suppress 2-D impulse noise. Two algorithms, termed RGMLE-C and RGMLE-CS, are derived using spatially adaptive variances, which are estimated from certainty information and joint certainty-and-similarity information, respectively. To ensure reliable implementation of the RGMLE-C and RGMLE-CS algorithms, a novel recursion-stopping strategy is proposed that evaluates the estimation error of uncorrupted pixels. Numerical experiments at different noise densities show that the two proposed algorithms yield significantly better results than some typical median-type filters. Efficient implementation is also realized via GPU (Graphic Processing Unit)-based parallelization techniques.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator's performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
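The Kasai autocorrelation estimator that the MLE is benchmarked against is compact enough to sketch: the Doppler frequency is the phase of the lag-one autocorrelation of the complex signal, scaled by the sampling rate. The sanity-check parameters below are invented, and no noise is added.

```python
import cmath
import math

def kasai_frequency(z, fs):
    """Kasai lag-one autocorrelation frequency estimator: the Doppler
    shift is the phase of the lag-one autocorrelation of the complex
    samples, scaled by the sampling rate."""
    r1 = sum(z[n + 1] * z[n].conjugate() for n in range(len(z) - 1))
    return fs * cmath.phase(r1) / (2.0 * math.pi)

# Noise-free sanity check: a complex exponential at 1 kHz sampled at 20 kHz.
fs, f_true = 20000.0, 1000.0
z = [cmath.exp(2j * math.pi * f_true * n / fs) for n in range(64)]
f_hat = kasai_frequency(z, fs)
print(round(f_hat))  # prints 1000
```

The estimator is unambiguous only for shifts below fs/2 (the phase wraps beyond ±π), which is one reason raising the acquisition rate changes its noise behaviour in the way the abstract describes.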
A maximum likelihood approach to estimating articulator positions from speech acoustics
Hogden, J.
1996-09-23
This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read.
Radmacher, M D; Kepler, T B
2001-03-01
The germinal center reaction (GCR) of vertebrate immunity provides a remarkable example of evolutionary succession, in which an advantageous phenotype arises as a spontaneous mutation from the parental type and eventually displaces the parental type altogether. In the case of the immune response to the hapten (4-hydroxy-3-nitrophenyl)acetyl (NP), as with several other designed immunogens, the process is dominated by a single key mutation, which greatly simplifies the modeling of and analysis of data. We developed a two-stage model of this process in which the primary stage represents the appearance and establishment of the mutant population as a stochastic process while the second stage represents the growth and dominance of the clone as a deterministic process, conditional on its time of establishment from stage one. We applied this model to the analysis of population samples from several germinal center (GC) reactions and used maximum-likelihood methods to estimate the waiting times to arrival and to dominance of the mutant clone. We determined the sampling properties of the maximum-likelihood estimates using Monte Carlo methods and compared them to their asymptotic distributions. The methods we present here are well-suited for use in the analysis of other systems, such as tumor growth and the experimental evolution of bacteria.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
Gutenberg-Richter b-value maximum likelihood estimation and sample size
NASA Astrophysics Data System (ADS)
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2016-06-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are aware of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results give estimates of the probability of obtaining a correct estimate of b, for a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
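The Aki-Utsu estimator under discussion, with Utsu's ΔM/2 correction for rounded magnitudes, is a one-line formula. A sketch on a synthetic catalogue follows; the catalogue parameters are invented, and since the ΔM/2 correction is only approximate for binned data the recovered b is close to, not exactly, the true value.

```python
import math
import random

random.seed(11)

def aki_utsu_b(mags, m_min, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value with Utsu's dm/2 correction
    for magnitudes quantised to bins of width dm."""
    mbar = sum(mags) / len(mags)
    return math.log10(math.e) / (mbar - (m_min - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue: b = 1.0 above a completeness
# magnitude of 3.0, quantised to 0.1-magnitude bins.
b_true, mc, dm = 1.0, 3.0, 0.1
beta = b_true * math.log(10.0)
mags = [mc + dm * math.floor(random.expovariate(beta) / dm) for _ in range(10000)]
b_hat = aki_utsu_b(mags, mc, dm)
print(round(b_hat, 2))
```

Rerunning this with a few tens of events instead of 10000 illustrates the abstract's warning directly: the scatter of b_hat across small samples is far too large for the estimate to be meaningful.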
Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera
NASA Technical Reports Server (NTRS)
Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.
1987-01-01
A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.
NASA Astrophysics Data System (ADS)
Jarmołowski, Wojciech
2015-06-01
The article describes the estimation of the a priori errors associated with heterogeneous, uncorrelated noise within one dataset. The errors are estimated by restricted maximum likelihood (REML). The solution is composed of a cross-validation technique named leave-one-out (LOO) and REML estimation of the a priori noise for the different groups obtained by LOO. A numerical test is the main part of this case study, and it presents two options. In the first, the whole dataset is split into two subsets using LOO, by finding potentially outlying data. A priori errors are then estimated in groups for the better and the worse subset, where the latter includes the outlying data. In the second, data are selected from the neighborhood of each point and two a priori errors are estimated by REML: one for the selected point and one for the surrounding group of data. Both ideas have been validated using LOO performed only at points of the better subset from the first kind of test. The use of homogeneous noise in the two example sets leads to LOO standard deviations equal to 1.83 and 1.54 mGal, respectively. The group estimation yields only a small improvement, at the level of 0.1 mGal, which can also be reached after removal of the worse points. The pointwise REML solution, however, provides LOO standard deviations that are at least 20 % smaller than the statistics from the homogeneous-noise application.
Langlois, Dominic; Cousineau, Denis; Thivierge, J P
2014-01-01
The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.
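For the continuous, unbounded case the power-law exponent has a closed-form maximum likelihood estimator, alpha_hat = 1 + n / sum(ln(x_i / x_min)); the discrete, bounded, and censored variants developed in the paper build on the same likelihood principle. A minimal sketch of the continuous estimator, checked on simulated data:

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """Continuous power-law MLE for the exponent alpha.

    For data x >= xmin drawn from p(x) ~ x**(-alpha), the closed-form MLE is
    alpha_hat = 1 + n / sum(ln(x_i / xmin)).
    """
    tail = [x for x in xs if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Inverse-transform sampling: if U ~ Uniform(0,1), then
# xmin * U**(-1/(alpha-1)) follows p(x) ~ x**(-alpha) on [xmin, inf).
random.seed(42)
alpha_true, xmin = 2.5, 1.0
sample = [xmin * random.random() ** (-1.0 / (alpha_true - 1.0))
          for _ in range(50_000)]
alpha_hat = powerlaw_mle(sample, xmin)
```

Least-squares fits to the log-log histogram, by contrast, have no such guarantee of consistency, which is one source of the misestimation the authors document.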
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid-body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. To obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
NASA Astrophysics Data System (ADS)
Chou, Heng-Chih; Wang, David
2007-11-01
We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted alternatives: the Merton model, the Z-score model, and the ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.
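For context, the Merton benchmark mentioned above maps asset value, debt, and asset volatility into a default probability via the distance to default. The asset parameters are taken as given here, whereas the paper estimates them (and the barrier) by maximum likelihood; the numbers below are purely illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_pd(V, D, mu, sigma, T):
    """Merton-model default probability P(V_T < D) for lognormal asset value.

    V: current asset value, D: face value of debt (default point),
    mu: asset drift, sigma: asset volatility, T: horizon in years.
    """
    dd = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-dd)  # distance to default mapped through the normal CDF

# A thinly capitalised firm should default more often than a well-capitalised one.
risky = merton_pd(V=100.0, D=90.0, mu=0.05, sigma=0.35, T=1.0)
safe = merton_pd(V=100.0, D=40.0, mu=0.05, sigma=0.35, T=1.0)
```

The barrier-option framework replaces the single terminal comparison V_T < D with a first-passage condition on an implied barrier, which is what the MLE in the paper recovers.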
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W., Jr.
2003-01-01
A simple power law model consisting of a single spectral index, sigma(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index sigma(sub 2) greater than sigma(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
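The efficiency property (P2) can be checked by simulation in the simplest setting: for a pure power law observed by an ideal (identity) detector response, the Fisher information per event for the index is 1/(alpha - 1)^2, so the CRB on the variance is (alpha - 1)^2 / n, and the empirical variance of the ML estimates should approach it. A sketch under these idealized assumptions (the paper's detector-response convolutions are not reproduced):

```python
import math
import random

def mle_alpha(xs, xmin):
    """Closed-form ML estimate of the spectral index for an ideal detector."""
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

random.seed(0)
alpha, xmin, n, trials = 2.7, 1.0, 2000, 400
estimates = []
for _ in range(trials):
    # inverse-transform sampling from p(x) ~ x**(-alpha) on [xmin, inf)
    xs = [xmin * random.random() ** (-1.0 / (alpha - 1.0)) for _ in range(n)]
    estimates.append(mle_alpha(xs, xmin))

mean_est = sum(estimates) / trials
var_est = sum((a - mean_est) ** 2 for a in estimates) / trials
crb = (alpha - 1.0) ** 2 / n   # Cramer-Rao bound for the index, n events
```

The ratio var_est / crb hovering near 1 is the numerical signature of property (P2); with a realistic detector response, the CRB must instead be derived for the spectrum-response combination, as the abstract notes.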
Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
SHOJAIE, ALI; MICHAILIDIS, GEORGE
2010-01-01
Summary Directed acyclic graphs are commonly used to represent causal relationships among random variables in graphical models. Applications of these models arise in the study of physical and biological systems where directed edges between nodes represent the influence of components of the system on each other. Estimation of directed graphs from observational data is computationally NP-hard. In addition, directed graphs with the same structure may be indistinguishable based on observations alone. When the nodes exhibit a natural ordering, the problem of estimating directed graphs reduces to the problem of estimating the structure of the network. In this paper, we propose an efficient penalized likelihood method for estimation of the adjacency matrix of directed acyclic graphs, when variables inherit a natural ordering. We study variable selection consistency of lasso and adaptive lasso penalties in high-dimensional sparse settings, and propose an error-based choice for selecting the tuning parameter. We show that although the lasso is only variable selection consistent under stringent conditions, the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions. PMID:22434937
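With a known ordering, the estimation step reduces to one penalized regression per node on its predecessors. The sketch below uses a plain lasso (the paper also analyses the adaptive lasso, which has better selection consistency) with a hand-rolled coordinate-descent solver on a toy three-node chain; the coefficients and penalty are illustrative.

```python
import random

def soft_threshold(z, t):
    """Soft-thresholding operator used by coordinate-descent lasso."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1 (no intercept)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (feature j held out)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / norm
    return b

def estimate_dag(data, lam):
    """Under a known variable ordering, fill the adjacency matrix column by
    column: node j is lasso-regressed on nodes 0..j-1 only."""
    p = len(data[0])
    adj = [[0.0] * p for _ in range(p)]
    for j in range(1, p):
        coef = lasso_cd([row[:j] for row in data], [row[j] for row in data], lam)
        for k in range(j):
            adj[k][j] = coef[k]
    return adj

# Toy chain x0 -> x1 -> x2 with known ordering (all coefficients hypothetical).
random.seed(1)
rows = []
for _ in range(400):
    x0 = random.gauss(0.0, 1.0)
    x1 = 0.8 * x0 + random.gauss(0.0, 0.6)
    x2 = 0.9 * x1 + random.gauss(0.0, 0.6)
    rows.append([x0, x1, x2])
adj = estimate_dag(rows, lam=0.1)  # true edges 0->1, 1->2 survive; 0->2 shrinks toward 0
```

The l1 penalty is what makes the recovered adjacency matrix sparse; the tuning-parameter choice (error-based in the paper) governs how aggressively spurious edges such as 0->2 are pruned.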
Nikoloulopoulos, Aristidis K
2016-06-30
The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable, and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE. Copyright © 2016 John Wiley & Sons, Ltd.
Stepanyuk, Andrey; Borisyuk, Anya; Belan, Pavel
2014-01-01
Dendritic integration and neuronal firing patterns strongly depend on biophysical properties of synaptic ligand-gated channels. However, precise estimation of biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that makes it possible to estimate not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current as compared to peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, a case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment. PMID:25324721
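For comparison, the classical non-stationary fluctuation analysis that the new method improves upon fits the mean-variance parabola sigma^2 = i*I - I^2/N to recover the unitary current i and channel count N. A sketch on synthetic binomial channel data (channel count, unitary current, and open probabilities are all made up); the fit is linear in the unknowns (i, 1/N), so the 2x2 normal equations suffice:

```python
import random

random.seed(7)
N_true, i_true = 100, 2.0    # hypothetical channel count and unitary current (pA)
trials = 400

means, variances = [], []
for p_open in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]:
    currents = []
    for _ in range(trials):
        # number of open channels is Binomial(N_true, p_open)
        n_open = sum(random.random() < p_open for _ in range(N_true))
        currents.append(i_true * n_open)
    m = sum(currents) / trials
    means.append(m)
    variances.append(sum((c - m) ** 2 for c in currents) / (trials - 1))

# Least-squares fit of var = i*I - (1/N)*I^2: linear model y = a*I + c*I^2
# with a = i and c = -1/N; solve the 2x2 normal equations directly.
s_uu = sum(I ** 2 for I in means)
s_uw = sum(I ** 3 for I in means)
s_ww = sum(I ** 4 for I in means)
s_uy = sum(I * v for I, v in zip(means, variances))
s_wy = sum(I ** 2 * v for I, v in zip(means, variances))
det = s_uu * s_ww - s_uw ** 2
i_hat = (s_uy * s_ww - s_wy * s_uw) / det     # coefficient on I
curv = (s_wy * s_uu - s_uy * s_uw) / det      # coefficient on I^2 (= -1/N)
N_hat = -1.0 / curv
```

This variance-based approach needs many sweeps and identical kinetics across receptors; the likelihood-based method in the abstract relaxes both requirements.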
NASA Astrophysics Data System (ADS)
Saatci, Esra; Akan, Aydin
2010-12-01
We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) model and the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group's respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group shape factor values are estimated in the super-Gaussian area.
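For a known shape factor beta, the GGD scale parameter has a closed-form ML estimate, alpha_hat = ((beta/n) * sum(|x_i|**beta))**(1/beta); the shape factor itself must be estimated separately (by the kurtosis method in the paper). A sketch, checked against the Gaussian special case beta = 2:

```python
import math
import random

def ggd_scale_mle(xs, beta):
    """Closed-form ML estimate of the GGD scale alpha for a known shape beta.

    For f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta),
    setting d(logL)/d(alpha) = 0 gives
    alpha_hat = ((beta/n) * sum(|x_i|**beta))**(1/beta).
    """
    return (beta / len(xs) * sum(abs(x) ** beta for x in xs)) ** (1.0 / beta)

# Sanity check: beta = 2 reduces the GGD to a normal distribution with
# standard deviation alpha / sqrt(2).
random.seed(3)
alpha_true = 1.5
xs = [random.gauss(0.0, alpha_true / math.sqrt(2.0)) for _ in range(20_000)]
alpha_hat = ggd_scale_mle(xs, beta=2.0)
```

Shape factors between 1 and 2, as reported for the patient group, interpolate between the Laplacian (beta = 1) and Gaussian (beta = 2) noise models.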
A method for modeling bias in a person's estimates of likelihoods of events
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.; Morera, Osvaldo
1988-01-01
It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W.
2002-01-01
A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from
Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions
Zoller, Stefan; Boskova, Veronika; Anisimova, Maria
2015-01-01
Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences with ever more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229
Real-time Data Acquisition and Maximum-Likelihood Estimation for Gamma Cameras
Furenlid, L.R.; Hesterman, J.Y.; Barrett, H.H.
2015-01-01
We have developed modular gamma-ray cameras for biomedical imaging that acquire data with a raw list-mode acquisition architecture. All observations associated with a gamma-ray event, such as photomultiplier (PMT) signals and time, are assembled into an event packet and added to an ordered list of event entries that comprise the acquired data. In this work we present the design of the data-acquisition system and discuss algorithms for a specialized computing engine that resides in the data path between the front and back ends of each camera and carries out maximum-likelihood position and energy estimations in real time while data are being acquired. PMID:27066595
A new maximum-likelihood change estimator for two-pass SAR coherent change detection
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary
2016-01-11
In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
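The baseline the CRCD metric is compared against, the sample degree of coherence, can be sketched on synthetic one-dimensional complex patches (the CRCD estimator itself is not reproduced here; magnitudes, phases, and patch size are all made up):

```python
import cmath
import random

def sample_coherence(f, g):
    """Sample degree of coherence between two co-registered complex patches:
    |sum f * conj(g)| / sqrt(sum |f|^2 * sum |g|^2), a value in [0, 1]."""
    num = sum(a * b.conjugate() for a, b in zip(f, g))
    den = (sum(abs(a) ** 2 for a in f) * sum(abs(b) ** 2 for b in g)) ** 0.5
    return abs(num) / den

random.seed(5)
# Synthetic "patch" of complex reflectances.
patch = [cmath.rect(0.5 + random.random(), random.uniform(-3.14, 3.14))
         for _ in range(64)]
# Second pass, no change: same scene plus slight noise -> coherence near 1.
second = [p + cmath.rect(0.05 * random.random(), random.uniform(-3.14, 3.14))
          for p in patch]
# Second pass, decorrelated scene: phases rerandomized -> coherence near 0.
shuffled = [cmath.rect(abs(p), random.uniform(-3.14, 3.14)) for p in patch]

unchanged = sample_coherence(patch, second)
changed = sample_coherence(patch, shuffled)
```

The failure mode the abstract describes arises because in low clutter-to-noise regions this statistic drops even without real scene change, falsely signalling temporal change.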
NASA Astrophysics Data System (ADS)
Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca
2016-04-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
Two-Locus Likelihoods Under Variable Population Size and Fine-Scale Recombination Rate Estimation.
Kamm, John A; Spence, Jeffrey P; Chan, Jeffrey; Song, Yun S
2016-07-01
Two-locus sampling probabilities have played a central role in devising an efficient composite-likelihood method for estimating fine-scale recombination rates. Due to mathematical and computational challenges, these sampling probabilities are typically computed under the unrealistic assumption of a constant population size, and simulation studies have shown that resulting recombination rate estimates can be severely biased in certain cases of historical population size changes. To alleviate this problem, we develop here new methods to compute the sampling probability for variable population size functions that are piecewise constant. Our main theoretical result, implemented in a new software package called LDpop, is a novel formula for the sampling probability that can be evaluated by numerically exponentiating a large but sparse matrix. This formula can handle moderate sample sizes ([Formula: see text]) and demographic size histories with a large number of epochs ([Formula: see text]). In addition, LDpop implements an approximate formula for the sampling probability that is reasonably accurate and scales to hundreds in sample size ([Formula: see text]). Finally, LDpop includes an importance sampler for the posterior distribution of two-locus genealogies, based on a new result for the optimal proposal distribution in the variable-size setting. Using our methods, we study how a sharp population bottleneck followed by rapid growth affects the correlation between partially linked sites. Then, through an extensive simulation study, we show that accounting for population size changes under such a demographic model leads to substantial improvements in fine-scale recombination rate estimation. PMID:27182948
NASA Astrophysics Data System (ADS)
Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.
2012-04-01
Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of seasonal and annual probability distribution of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to some case studies for which extended hydrological information is available at annual and seasonal scale.
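The compatibility requirement between seasonal and annual distributions takes a simple form when seasonal maxima are independent: the annual-maximum CDF is the product of the seasonal-maximum CDFs. A sketch with two hypothetical Gumbel-distributed seasons (location and scale values are made up), verified by Monte Carlo:

```python
import math
import random

def gumbel_cdf(x, mu, beta):
    """Gumbel (EV1) CDF, a common model for seasonal flood maxima."""
    return math.exp(-math.exp(-(x - mu) / beta))

# Two hypothetical seasons, each with its own (location, scale).
seasons = [(10.0, 2.0), (14.0, 3.0)]

def annual_cdf(x):
    """Compatibility constraint: with independent seasons, the annual-maximum
    CDF is the product of the seasonal-maximum CDFs."""
    p = 1.0
    for mu, beta in seasons:
        p *= gumbel_cdf(x, mu, beta)
    return p

def gumbel_sample(mu, beta):
    """Inverse-transform sampling from the Gumbel distribution."""
    return mu - beta * math.log(-math.log(random.random()))

# Monte-Carlo check: simulate seasonal maxima, take each year's maximum.
random.seed(11)
n = 100_000
annual = [max(gumbel_sample(mu, beta) for mu, beta in seasons) for _ in range(n)]
x0 = 18.0
empirical = sum(a <= x0 for a in annual) / n
theoretical = annual_cdf(x0)
```

Joint estimation, as proposed in the abstract, fits the seasonal parameters so that this product constraint also reproduces the observed annual-maximum record.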
Bromaghin, Jeffrey; Gates, Kenneth S.; Palmer, Douglas E.
2010-01-01
Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding
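The pooled, single-stratum baseline that the stratified likelihood model generalizes is the Lincoln-Petersen estimator, usually applied in Chapman's nearly unbiased form; the counts below are hypothetical:

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's nearly unbiased variant of the Lincoln-Petersen
    mark-recapture abundance estimator: N_hat = (M+1)(C+1)/(R+1) - 1."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# Hypothetical example: 1,200 fish radio-tagged in the lower river,
# 4,000 fish counted at an upstream assessment project, 240 of them tagged.
n_hat = chapman_estimate(marked=1200, captured=4000, recaptured=240)
```

This pooled estimator assumes homogeneous capture probabilities across populations and time; the temporally stratified model in the abstract relaxes exactly that assumption by letting capture probability and migratory timing vary among strata.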
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.
2009-01-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp ϕ0 where ϕ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log–concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the “lower invelope” of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of ϕ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values. PMID:19881896
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated
The Likelihood Function and Likelihood Statistics
NASA Astrophysics Data System (ADS)
Robinson, Edward L.
2016-01-01
The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
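The abstract's contrast between least squares and Poisson maximum likelihood can be sketched numerically. The snippet below is an illustrative reconstruction (not code from the article; all names are hypothetical): it maximizes the Poisson log-likelihood for a constant rate by golden-section search and confirms that the maximizer coincides with the closed-form MLE, the sample mean.

```python
import math

def poisson_loglik(lam, counts):
    # log-likelihood of i.i.d. Poisson counts at rate lam
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def poisson_mle(counts, lo=1e-6, hi=50.0, iters=80):
    # Golden-section search for the maximum; the Poisson
    # log-likelihood is concave in lam, so this is safe.
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if poisson_loglik(c, counts) < poisson_loglik(d, counts):
            a = c
        else:
            b = d
    return (a + b) / 2

counts = [3, 1, 4, 1, 5, 9, 2, 6]
lam_hat = poisson_mle(counts)
```

For Gaussian data the same maximization reduces to least squares; for Poisson data it reduces to the sample mean instead, which is the point of the example above.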
NASA Technical Reports Server (NTRS)
Howell, Leonard W.
2002-01-01
The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specificity of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum variance, and normally distributed spectra information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope. This ML methodology is then generalized to estimate broken power law spectral parameters from real cosmic-ray data sets.
Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.
Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth
2016-01-01
We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases.
List-mode likelihood: EM algorithm and image quality estimation demonstrated on 2-D PET.
Parra, L; Barrett, H H
1998-04-01
Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al., this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. We compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
Maximum-likelihood q-estimator uncovers the role of potassium at neuromuscular junctions.
da Silva, A J; Trindade, M A S; Santos, D O C; Lima, R F
2016-02-01
Recently, we demonstrated the existence of nonextensive behavior in neuromuscular transmission (da Silva et al. in Phys Rev E 84:041925, 2011). In this letter, we first obtain a maximum-likelihood q-estimator to calculate the scale factor ([Formula: see text]) and the q-index of q-Gaussian distributions. Next, we use the indexes to analyze spontaneous miniature end plate potentials in electrophysiological recordings from neuromuscular junctions. These calculations were performed assuming both normal and high extracellular potassium concentrations [Formula: see text]. This protocol was used to test the validity of Tsallis statistics under electrophysiological conditions closely resembling physiological stimuli. The analysis shows that q-indexes are distinct depending on the extracellular potassium concentration. Our letter provides a general way to obtain the best estimate of parameters from a q-Gaussian distribution function. It also expands the validity of Tsallis statistics in realistic physiological stimulus conditions. In addition, we discuss the physical and physiological implications of these findings.
NASA Astrophysics Data System (ADS)
Zhao, Xiang; Lin, Jiming
2016-04-01
Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the view of statistical optimization for an outdoor image sensor-based visible light positioning system, we analyze and derive the maximum likelihood estimation and corresponding Cramér-Rao lower bounds of vehicle position, under the condition that the observation values of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance being constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest-ascent (deflected-gradient) procedures, contain those of Hosmer as a special case.
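The flavor of such successive-approximation procedures can be conveyed with a much simpler fixed-point iteration: an EM-style update for the mixing proportion of a two-component normal mixture with known unit-variance components, using unlabeled samples only. This is a hedged sketch of the general idea, not Walker's deflected-gradient scheme, and every name below is illustrative.

```python
import math

def normal_pdf(x, mu):
    # standard-deviation-1 normal density
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def estimate_mixing(unlabeled, mu1, mu2, iters=100):
    """Successive-approximation MLE of the mixing proportion p in
    p*N(mu1,1) + (1-p)*N(mu2,1): iterate p <- mean posterior
    probability of component 1 (an EM fixed point)."""
    p = 0.5
    for _ in range(iters):
        resp = [p * normal_pdf(x, mu1) /
                (p * normal_pdf(x, mu1) + (1 - p) * normal_pdf(x, mu2))
                for x in unlabeled]
        p = sum(resp) / len(resp)
    return p

# Four points near component 1 (mean 0), three near component 2 (mean 4).
data = [-0.5, 0.1, 0.3, -0.2, 3.8, 4.2, 4.1]
p_hat = estimate_mixing(data, 0.0, 4.0)
```

With well-separated components, the iteration settles near the fraction of points attributable to the first component (about 4/7 here).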
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases for exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
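The MLE objective the abstract advocates can be illustrated without the Levenberg-Marquardt machinery: for a single-exponential decay model, the amplitude has a closed form at each candidate lifetime, so a one-dimensional profile-likelihood scan suffices. The sketch below is illustrative only (noise-free "counts", hypothetical names), not the paper's algorithm.

```python
import math

def poisson_nll(counts, mus):
    # Poisson negative log-likelihood, dropping the constant sum(log k!)
    return sum(mu - k * math.log(mu) for k, mu in zip(counts, mus))

def fit_decay_mle(times, counts, tau_grid):
    """Fit mu_i = A * exp(-t_i / tau) to histogram counts by Poisson MLE.
    For fixed tau the amplitude MLE has the closed form
    A = sum(k_i) / sum(exp(-t_i / tau)), so scanning tau profiles out A."""
    total = sum(counts)
    best = None
    for tau in tau_grid:
        f = [math.exp(-t / tau) for t in times]
        A = total / sum(f)
        nll = poisson_nll(counts, [A * fi for fi in f])
        if best is None or nll < best[0]:
            best = (nll, A, tau)
    return best[1], best[2]  # (A_hat, tau_hat)

# Noise-free expected counts at A = 100, tau = 1.0, used as data for illustration.
times = [0.1 * i for i in range(20)]
counts = [100.0 * math.exp(-t) for t in times]
A_hat, tau_hat = fit_decay_mle(times, counts, [0.5 + 0.01 * i for i in range(101)])
```

With noisy data the same objective would be handed to a proper minimizer (the paper's L-M extension); the profile scan just keeps the illustration self-contained.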
Two-group time-to-event continual reassessment method using likelihood estimation.
Salter, Amber; O'Quigley, John; Cutter, Gary R; Aban, Inmaculada B
2015-11-01
The presence of patient heterogeneity in dose finding studies is inherent (i.e. groups with different maximum tolerated doses). When this type of heterogeneity is not accounted for in the trial design, subjects may be exposed to toxic or suboptimal doses. Options to handle patient heterogeneity include conducting separate trials or splitting the trial into arms. However, cost and/or lack of resources may limit the feasibility of these options. If information is shared between the groups, then both of these options do not benefit from using the shared information. Extending current dose finding designs to handle patient heterogeneity maximizes the utility of existing methods within a single trial. We propose a modification to the time-to-event continual reassessment method to accommodate two groups using a two-parameter model and maximum likelihood estimation. The operating characteristics of the design are investigated through simulations under different scenarios including the scenario where one conducts two separate trials, one for each group, using the one-sample time-to-event continual reassessment method.
Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm
Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.
2010-01-01
A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
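The contracting-grid idea itself is simple to sketch serially (the paper's contribution lies in the parallel FPGA, Cell/BE, and GPU implementations): evaluate the objective on a coarse grid, re-center on the best point, shrink the grid, and repeat. In this illustrative Python version, the grid size, shrink factor, and toy objective are all assumptions, not the authors' parameters.

```python
def contracting_grid_search(f, center, half_width, n=5, shrink=0.5, iters=15):
    """Maximize f(x, y) by repeatedly evaluating it on an n-by-n grid,
    re-centering on the best grid point, and contracting the grid."""
    (cx, cy), (hx, hy) = center, half_width
    for _ in range(iters):
        best_val, best_pt = None, None
        for i in range(n):
            for j in range(n):
                x = cx - hx + 2.0 * hx * i / (n - 1)
                y = cy - hy + 2.0 * hy * j / (n - 1)
                v = f(x, y)
                if best_val is None or v > best_val:
                    best_val, best_pt = v, (x, y)
        cx, cy = best_pt                      # re-center on current best
        hx, hy = hx * shrink, hy * shrink     # contract the grid
    return cx, cy

# Toy concave "log-likelihood" peaked at (1.3, -0.7).
loglik = lambda x, y: -((x - 1.3) ** 2 + (y + 0.7) ** 2)
x_hat, y_hat = contracting_grid_search(loglik, (0.0, 0.0), (4.0, 4.0))
```

Because every grid point can be evaluated independently, the inner double loop is what parallelizes naturally in hardware pipelines.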
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
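A minimal sketch of the bootstrap-MLE variant of the Pearson statistic, for a Poisson model whose MLE is the sample mean: observed bin counts come from the original data, while expected counts use the MLE of a bootstrap resample. This is an illustrative toy (hypothetical data and binning), not the authors' code.

```python
import math, random

def pearson_stat(observed, expected):
    # classic Pearson chi-squared statistic over the partitioned bins
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def bootstrap_pearson(data, bins, rng):
    """Pearson statistic for a Poisson fit, with expected counts computed
    from the bootstrap-sample MLE (here the bootstrap mean), while the
    observed counts are binned from the original data."""
    boot = [rng.choice(data) for _ in data]
    lam = sum(boot) / len(boot)              # bootstrap-sample MLE
    observed = [sum(1 for x in data if lo <= x < hi) for lo, hi in bins]
    def cell_prob(lo, hi):
        return sum(math.exp(-lam) * lam ** k / math.factorial(k)
                   for k in range(lo, hi))
    expected = [len(data) * cell_prob(lo, hi) for lo, hi in bins]
    return pearson_stat(observed, expected)

rng = random.Random(42)
data = [2, 3, 1, 4, 2, 0, 3, 2, 1, 5, 2, 3, 4, 1, 2, 2, 3, 0, 1, 2]
bins = [(0, 2), (2, 3), (3, 12)]   # partition of the (truncated) support
stat = bootstrap_pearson(data, bins, rng)
```

In the full procedure, the statistic would then be referred to a chi-squared distribution with bins-minus-one degrees of freedom, which the bootstrap MLE is designed to recover.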
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information using the multiple data sets in concert as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
Gang, Grace J.; Stayman, J. Webster; Zbijewski, Wojciech; Siewerdsen, Jeffrey H.
2014-08-15
Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
ERIC Educational Resources Information Center
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Gruber, Susan; Radice, Rosalba; Grieve, Richard; Sekhon, Jasjeet S
2014-01-01
Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maximum likelihood estimation is a double-robust method designed to reduce bias in the estimate of the parameter of interest. Bias-corrected matching reduces bias due to covariate imbalance between matched pairs by using regression predictions. We illustrate the methods in an evaluation of different types of hip prosthesis on the health-related quality of life of patients with osteoarthritis. We undertake a simulation study, grounded in the case study, to compare the relative bias, efficiency and confidence interval coverage of the methods. We consider data generating processes with non-linear functional form relationships, normal and non-normal endpoints. We find that across the circumstances considered, bias-corrected matching generally reported less bias, but higher variance than targeted maximum likelihood estimation. When either targeted maximum likelihood estimation or bias-corrected matching incorporated machine learning, bias was much reduced, compared to using misspecified parametric models. PMID:24525488
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
NASA Astrophysics Data System (ADS)
Rivera, Diego; Rivas, Yessica; Godoy, Alex
2015-02-01
Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
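The GLUE procedure underlying this and the following record can be sketched in a few lines: Monte Carlo sampling of parameter sets, a likelihood measure (here Nash-Sutcliffe efficiency, an assumed choice), a behavioural threshold, and envelope uncertainty bounds. The toy runoff model and all names below are illustrative, not from either paper.

```python
import random

def glue(simulator, observed, param_ranges, n_samples, threshold, rng):
    """Minimal GLUE sketch: sample parameter sets uniformly, keep the
    'behavioural' ones whose Nash-Sutcliffe efficiency (NSE) exceeds the
    threshold, and report uncertainty bounds as the min/max envelope of
    the behavioural simulations."""
    mean_obs = sum(observed) / len(observed)
    denom = sum((o - mean_obs) ** 2 for o in observed)
    behavioural = []
    for _ in range(n_samples):
        params = [rng.uniform(lo, hi) for lo, hi in param_ranges]
        sim = simulator(params)
        nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if nse > threshold:
            behavioural.append(sim)
    lower = [min(s[i] for s in behavioural) for i in range(len(observed))]
    upper = [max(s[i] for s in behavioural) for i in range(len(observed))]
    return behavioural, lower, upper

# Toy runoff model: discharge proportional to rainfall, coefficient unknown.
rain = list(range(1, 11))
obs = [2.0 * r for r in rain]
rng = random.Random(1)
sets, lo_b, up_b = glue(lambda p: [p[0] * r for r in rain],
                        obs, [(0.0, 4.0)], 2000, 0.5, rng)
```

Narrowing the behavioural parameter ranges (e.g. after fixing a well-identified parameter, as in the Chillan River study) shrinks the envelope between `lo_b` and `up_b`, which is exactly the bound-width reduction the abstract reports.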
Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.
Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène
2016-07-01
Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific
DeRamus, Thomas P.; Kana, Rajesh K.
2014-01-01
Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD, and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy, and neurodevelopmental adaptations. PMID:25844306
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.
Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène
2016-07-01
Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific
Wright, April M.; Hillis, David M.
2014-01-01
Despite the introduction of likelihood-based methods for estimating phylogenetic trees from phenotypic data, parsimony remains the most widely-used optimality criterion for building trees from discrete morphological data. However, it has been known for decades that there are regions of solution space in which parsimony is a poor estimator of tree topology. Numerous software implementations of likelihood-based models for the estimation of phylogeny from discrete morphological data exist, especially for the Mk model of discrete character evolution. Here we explore the efficacy of Bayesian estimation of phylogeny, using the Mk model, under conditions that are commonly encountered in paleontological studies. Using simulated data, we describe the relative performances of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. PMID:25279853
A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood.
Guindon, Stéphane; Gascuel, Olivier
2003-10-01
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitate fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.
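The core idea of adjusting a branch length uphill in likelihood can be illustrated on the smallest possible case: a single branch between two aligned sequences under the Jukes-Cantor (JC69) model, where the MLE also has a closed form to check against. This is a toy sketch, not the PHYML algorithm itself:

```python
import math

def p_diff(d):
    """JC69: probability that two aligned bases differ after branch length d."""
    return 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))

def log_lik(d, n_same, n_diff):
    p = p_diff(d)
    return n_diff * math.log(p) + n_same * math.log(1.0 - p)

def hill_climb(n_same, n_diff, d=0.1, step=0.1, tol=1e-8):
    """Move the branch length uphill; halve the step when stuck."""
    best = log_lik(d, n_same, n_diff)
    while step > tol:
        moved = False
        for cand in (d + step, d - step):
            if cand > 0.0:
                ll = log_lik(cand, n_same, n_diff)
                if ll > best:
                    d, best, moved = cand, ll, True
        if not moved:
            step /= 2.0
    return d

# 1000 aligned sites, 120 of which differ between the two sequences
d_hat = hill_climb(880, 120)
# JC69 has a closed-form MLE to compare against: d = -3/4 ln(1 - 4p/3)
d_closed = -0.75 * math.log(1.0 - 4.0 * (120 / 1000) / 3.0)
```

The hill climber recovers the closed-form estimate; PHYML applies the same principle jointly over all branch lengths and the tree topology, where no closed form exists.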
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1984-01-01
A discussion is undertaken concerning the maximum likelihood estimator and the aircraft equations of motion it employs, with attention to the application of the concepts of minimization and estimation to a simple computed aircraft example. Graphic representations are given for the cost functions to help illustrate the minimization process. The basic concepts are then generalized, and estimations obtained from flight data are evaluated. The example considered shows the advantage of low measurement noise, multiple estimates at a given condition, the Cramer-Rao bounds, and the quality of the match between measured and computed data.
Reconstruction of difference in sequential CT studies using penalized likelihood estimation
Pourmorteza, A; Dang, H; Siewerdsen, J H; Stayman, J W
2016-01-01
Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm, in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, clinically reasonable capture ranges were demonstrated. Lastly, the registration algorithm had a broad capture range and the error for reconstruction of CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. The RoD has potential for delivering high-quality difference images in a range of sequential clinical
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
NASA Astrophysics Data System (ADS)
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One mathematical model of such flows is the modulated MAP event flow operating under conditions of an unextendable dead time period. The dead time period is assumed to be an unknown fixed value. The problem of estimating the dead time period from observations of event arrival times is solved by the method of maximum likelihood.
Vallisneri, Michele
2011-11-01
Gravitational-wave astronomers often wish to characterize the expected parameter-estimation accuracy of future observations. The Fisher matrix provides a lower bound on the spread of the maximum-likelihood estimator across noise realizations, as well as the leading-order width of the posterior probability, but it is limited to high signal strengths often not realized in practice. By contrast, Monte Carlo Bayesian inference provides the full posterior for any signal strength, but it is too expensive to repeat for a representative set of noises. Here I describe an efficient semianalytical technique to map the exact sampling distribution of the maximum-likelihood estimator across noise realizations, for any signal strength. This technique can be applied to any estimation problem for signals in additive Gaussian noise. PMID:22181593
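The relationship this record describes, that the inverse Fisher matrix lower-bounds the spread of the maximum-likelihood estimator across noise realizations, can be checked directly for a linear signal model in additive Gaussian noise, where the bound is attained exactly. The sinusoidal template below is an arbitrary illustrative choice, not a gravitational-wave model:

```python
import math, random

rng = random.Random(42)

# Linear model y_i = A * s_i + n_i with white Gaussian noise of known sigma
s = [math.sin(0.1 * i) for i in range(200)]   # illustrative signal template
A_true, sigma = 2.0, 1.0

fisher = sum(x * x for x in s) / sigma ** 2   # scalar Fisher "matrix" I(A)
crb_sd = 1.0 / math.sqrt(fisher)              # Cramer-Rao bound on sd of the MLE

def mle_amplitude(y):
    """Matched-filter estimate; the exact MLE for this linear Gaussian model."""
    return sum(si * yi for si, yi in zip(s, y)) / sum(si * si for si in s)

# Map the sampling distribution of the MLE across many noise realizations
estimates = []
for _ in range(2000):
    y = [A_true * si + rng.gauss(0.0, sigma) for si in s]
    estimates.append(mle_amplitude(y))

mean_est = sum(estimates) / len(estimates)
sd_est = math.sqrt(sum((e - mean_est) ** 2 for e in estimates) / len(estimates))
```

For this linear model the bound is attained at any signal strength; the nonlinear case is exactly where the Fisher approximation degrades and the semianalytical mapping described in the abstract becomes useful.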
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
Berthier, Pierre; Beaumont, Mark A; Cornuet, Jean-Marie; Luikart, Gordon
2002-01-01
A new genetic estimator of the effective population size (N(e)) is introduced. This likelihood-based (LB) estimator uses two temporally spaced genetic samples of individuals from a population. We compared its performance to that of the classical F-statistic-based N(e) estimator (N(eFk)) by using data from simulated populations with known N(e) and real populations. The new likelihood-based estimator (N(eLB)) showed narrower credible intervals and greater accuracy than N(eFk) when genetic drift was strong, but performed only slightly better when genetic drift was relatively weak. When drift was strong (e.g., N(e) = 20 for five generations), as few as approximately 10 loci (heterozygosity of 0.6; samples of 30 individuals) are sufficient to consistently achieve credible intervals with an upper limit <50 using the LB method. In contrast, approximately 20 loci are required for the same precision when using the classical F-statistic approach. The N(eLB) estimator is much improved over the classical method when there are many rare alleles. It will be especially useful in conservation biology because it less often overestimates N(e) than does N(eFk) and thus is less likely to erroneously suggest that a population is large and has a low extinction risk. PMID:11861575
O'Hare, A; Orton, R J; Bessell, P R; Kao, R R
2014-05-22
Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems.
O'Hare, A.; Orton, R. J.; Bessell, P. R.; Kao, R. R.
2014-01-01
Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems. PMID:24718762
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Beerli, P; Felsenstein, J
1999-06-01
A new method for the estimation of migration rates and effective population sizes is described. It uses a maximum-likelihood framework based on coalescence theory. The parameters are estimated by Metropolis-Hastings importance sampling. In a two-population model this method estimates four parameters: the effective population size and the immigration rate for each population relative to the mutation rate. Summarizing over loci can be done by assuming either that the mutation rate is the same for all loci or that the mutation rates are gamma distributed among loci but the same for all sites of a locus. The estimates are as good as or better than those from an optimized FST-based measure. The program is available on the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/.
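Metropolis-Hastings sampling of a likelihood surface can be sketched in miniature. The example below targets the posterior of a single Gaussian mean under a flat prior rather than the coalescent likelihood used by the program; additive constants in the log-likelihood are dropped because they cancel in the acceptance ratio:

```python
import math, random

rng = random.Random(7)

data = [rng.gauss(5.0, 2.0) for _ in range(100)]   # synthetic observations
sigma = 2.0                                        # known observation sd

def log_lik(theta):
    # Constants are omitted: they cancel in the acceptance ratio
    return -sum((x - theta) ** 2 for x in data) / (2.0 * sigma ** 2)

theta, ll = 0.0, log_lik(0.0)
samples = []
for step in range(20000):
    prop = theta + rng.gauss(0.0, 0.5)             # symmetric random-walk proposal
    ll_prop = log_lik(prop)
    # Metropolis-Hastings acceptance rule for a symmetric proposal
    if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
        theta, ll = prop, ll_prop
    if step >= 5000:                               # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
sample_mean = sum(data) / len(data)                # flat prior: posterior mean tracks this
```

The coalescent setting replaces `log_lik` with an importance-sampled likelihood over genealogies, but the accept/reject mechanics are the same.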
NASA Astrophysics Data System (ADS)
Price, K.; Purucker, T.; Kraemer, S.; Babendreier, J. E.
2011-12-01
Four nested sub-watersheds (21 to 10100 km^2) of the Neuse River in North Carolina are used to investigate calibration tradeoffs in goodness-of-fit metrics using multiple likelihood methods. Calibration of watershed hydrologic models is commonly achieved by optimizing a single goodness-of-fit metric to characterize simulated versus observed flows (e.g., R^2 and Nash-Sutcliffe Efficiency Coefficient, or NSE). However, each of these objective functions heavily weights a particular aspect of streamflow. For example, NSE and R^2 both emphasize high flows in evaluating simulation fit, while the Modified Nash-Sutcliffe Efficiency Coefficient (MNSE) emphasizes low flows. Other metrics, such as the ratio of the simulated versus observed flow standard deviations (SDR), prioritize overall flow variability. In this comparison, we use informal likelihood methods to investigate the tradeoffs of calibrating streamflow on three standard goodness-of-fit metrics (NSE, MNSE, and SDR), as well as an index metric that equally weights these three objective functions to address a range of flow characteristics. We present a flexible method that allows calibration targets to be determined by modeling goals. In this process, we begin by using Latin Hypercube Sampling (LHS) to reduce the simulations required to explore the full parameter space. The correlation structure of a large suite of goodness-of-fit metrics is explored to select metrics for use in an index function that incorporates a range of flow characteristics while avoiding redundancy. An iterative informal likelihood procedure is used to narrow parameter ranges after each simulation set to areas of the range with the most support from the observed data. A stopping rule is implemented to characterize the overall goodness-of-fit associated with the parameter set for each pass, with the best-fit pass distributions used as the calibrated set for the next simulation set. This process allows a great deal of flexibility. The process is
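The three goodness-of-fit metrics named above are straightforward to compute; one common form of each is sketched below, together with one possible equal-weight index (the study's exact index formula is not specified here, so this combination is an illustrative assumption):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency; squared errors emphasize high flows."""
    m = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - m) ** 2 for o in obs))

def mnse(obs, sim):
    """Modified NSE (absolute errors); gives low flows more weight."""
    m = sum(obs) / len(obs)
    return 1.0 - (sum(abs(o - s) for o, s in zip(obs, sim))
                  / sum(abs(o - m) for o in obs))

def sdr(obs, sim):
    """Ratio of simulated to observed standard deviation (flow variability)."""
    def sd(x):
        m = sum(x) / len(x)
        return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return sd(sim) / sd(obs)

def index(obs, sim):
    """Equal-weight combination; 1.0 means a perfect fit on all three."""
    return (nse(obs, sim) + mnse(obs, sim)
            + (1.0 - abs(1.0 - sdr(obs, sim)))) / 3.0

obs = [1.0, 2.0, 3.0, 4.0, 10.0]
perfect = index(obs, obs)                    # 1.0 by construction
biased = index(obs, [o + 0.5 for o in obs])  # constant offset: penalized
```

Because each metric weights a different aspect of the hydrograph, calibrating on the index rather than any single metric is what produces the tradeoffs the abstract describes.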
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
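The conditional-independence assumption makes the combination rule concrete: the combined evidence is the product of the per-classifier likelihood ratios. The sketch below assumes Gaussian class-conditional score distributions (an illustrative choice, not the paper's data) and checks that the combined score improves the area under the ROC curve:

```python
import math, random

rng = random.Random(3)

def gauss_pdf(x, mu, sd=1.0):
    return math.exp(-((x - mu) ** 2) / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))

def lr(x):
    """Likelihood ratio p(score | diseased) / p(score | healthy)."""
    return gauss_pdf(x, 1.0) / gauss_pdf(x, 0.0)

def auc(pos, neg):
    """P(random positive scores above a random negative): area under the ROC."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two classifiers whose scores are conditionally independent given the label
pos1 = [rng.gauss(1.0, 1.0) for _ in range(500)]
neg1 = [rng.gauss(0.0, 1.0) for _ in range(500)]
pos2 = [rng.gauss(1.0, 1.0) for _ in range(500)]
neg2 = [rng.gauss(0.0, 1.0) for _ in range(500)]

# Conditional independence => combined evidence is the product of the LRs
pos_comb = [lr(a) * lr(b) for a, b in zip(pos1, pos2)]
neg_comb = [lr(a) * lr(b) for a, b in zip(neg1, neg2)]

auc_single = auc(pos1, neg1)
auc_comb = auc(pos_comb, neg_comb)
```

Thresholding the combined likelihood ratio sweeps out the combined ROC curve; the paper's contribution is doing this per operating point when the classifiers were never applied to a common data set.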
Nickl-Jockschat, Thomas; Habel, Ute; Michel, Tanja Maria; Manning, Janessa; Laird, Angela R; Fox, Peter T; Schneider, Frank; Eickhoff, Simon B
2012-06-01
Autism spectrum disorders (ASD) are pervasive developmental disorders with characteristic core symptoms such as impairments in social interaction, deviance in communication, repetitive and stereotyped behavior, and impaired motor skills. Anomalies of brain structure have repeatedly been hypothesized to play a major role in the etiopathogenesis of the disorder. Our objective was to perform unbiased meta-analysis on brain structure changes as reported in the current ASD literature. We thus conducted a comprehensive search for morphometric studies by Pubmed query and literature review. We used a revised version of the activation likelihood estimation (ALE) approach for coordinate-based meta-analysis of neuroimaging results. Probabilistic cytoarchitectonic maps were applied to compare the localization of the obtained significant effects to histological areas. Each of the significant ALE clusters was analyzed separately for age effects on gray and white matter density changes. We found six significant clusters of convergence indicating disturbances in the brain structure of ASD patients, including the lateral occipital lobe, the pericentral region, the medial temporal lobe, the basal ganglia, and proximate to the right parietal operculum. Our study provides the first quantitative summary of brain structure changes reported in literature on autism spectrum disorders. In contrast to the rather small sample sizes of the original studies, our meta-analysis encompasses data of 277 ASD patients and 303 healthy controls. This unbiased summary provided evidence for consistent structural abnormalities in spite of heterogeneous diagnostic criteria and voxel-based morphometry (VBM) methodology, but also hinted at a dependency of VBM findings on the age of the patients.
Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael
2016-07-26
We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
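Feature 2 above, correcting the likelihood for the shortest detectable event, can be illustrated for a single-exponential dwell-time model, where renormalizing the PDF over t ≥ t_min yields a closed-form MLE. This is a minimal sketch of the idea, not MEMLET's implementation; the rate and dead-time values are arbitrary:

```python
import random

rng = random.Random(11)

k_true, t_min = 2.0, 0.2   # true rate; instrument dead time (illustrative values)

# Simulate single-molecule dwell times; events shorter than t_min go undetected
dwells = [rng.expovariate(k_true) for _ in range(100000)]
observed = [t for t in dwells if t >= t_min]

# Naive exponential MLE ignores the truncation and is biased low
k_naive = 1.0 / (sum(observed) / len(observed))

# Renormalized PDF f(t) = k * exp(-k * (t - t_min)) for t >= t_min
# gives the closed-form corrected MLE: 1 / (mean(observed) - t_min)
k_corrected = 1.0 / (sum(observed) / len(observed) - t_min)
```

MEMLET applies the same renormalization numerically for arbitrary user-specified PDFs, which is why it generalizes beyond this closed-form case.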
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
Lermer, Eva; Streicher, Bernhard; Sachs, Rainer; Raue, Martina; Frey, Dieter
2016-03-01
Recent findings on construal level theory (CLT) suggest that abstract thinking leads to a lower estimated probability of an event occurring compared to concrete thinking. We applied this idea to the risk context and explored the influence of construal level (CL) on the overestimation of small and underestimation of large probabilities for risk estimates concerning a vague target person (Study 1 and Study 3) and personal risk estimates (Study 2). We were specifically interested in whether the often-found overestimation of small probabilities could be reduced with abstract thinking, and whether the often-found underestimation of large probabilities could be reduced with concrete thinking. The results showed that CL influenced risk estimates. In particular, a concrete mindset led to higher risk estimates compared to an abstract mindset for several adverse events, including events with small and large probabilities. This suggests that CL manipulation can indeed be used to improve the accuracy of lay people's estimates of small and large probabilities. Moreover, the results suggest that professional risk managers' risk estimates of common events (thus with a relatively high probability) could be improved by adopting a concrete mindset. However, the abstract manipulation did not lead managers to estimate extremely unlikely events more accurately. Potential reasons for the different effects of CL manipulation on the accuracy of risk estimates between lay people and risk managers are discussed.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
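The core fit underlying such a procedure, maximum-likelihood estimation of the threshold and slope of a logistic psychometric function from binary trial data, can be sketched as follows. This is a simplified Python illustration with invented parameter values and a fixed lapse rate; it omits the adaptive stimulus placement that the UML procedure provides:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Logistic psychometric function with threshold alpha and slope beta;
# the lapse rate is held fixed here for simplicity (illustrative value)
def psychometric(x, alpha, beta, lapse=0.02):
    return lapse + (1 - 2 * lapse) / (1 + np.exp(-beta * (x - alpha)))

alpha_true, beta_true = 0.0, 2.0
x = rng.uniform(-3, 3, 2000)                                   # stimulus levels
y = rng.random(2000) < psychometric(x, alpha_true, beta_true)  # binary responses

# Bernoulli log-likelihood of the observed responses
def neg_log_lik(params):
    p = np.clip(psychometric(x, *params), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[0.5, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = fit.x
```

An adaptive procedure such as UML places each trial's stimulus to maximize the expected information about these parameters, rather than sampling levels uniformly as done here.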
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2012-01-01
Fraction of missing information [lambda][subscript j] is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0), let sigma > 0, and consider estimators f of f(0) defined by (1).
Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S
2015-01-01
Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
Likelihood parameter estimation for calibrating a soil moisture model using radar backscatter
Technology Transfer Automated Retrieval System (TEKTRAN)
Assimilating soil moisture information contained in synthetic aperture radar imagery into land surface model predictions can be done using a calibration, or parameter estimation, approach. The presence of speckle, however, necessitates aggregating backscatter measurements over large land areas in or...
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are also described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple-maneuver analysis also proved useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for the overall analysis are discussed as well.
Process for estimating likelihood and confidence in post detonation nuclear forensics.
Darby, John L.; Craft, Charles M.
2014-07-01
Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.
The Undiscovered Country: Can We Estimate the Likelihood of Extrasolar Planetary Habitability?
NASA Astrophysics Data System (ADS)
Unterborn, C. T.; Panero, W. R.; Hull, S. D.
2015-12-01
Plate tectonics has operated on Earth for a majority of its lifetime. Tectonics regulates atmospheric carbon and creates a planetary-scale water cycle, and is a primary factor in the Earth being habitable. While the mechanism for initiating tectonics is unknown, as we expand our search for habitable worlds, understanding which planetary compositions produce planets capable of supporting long-term tectonics is of paramount importance. On Earth, this sustentation of tectonics is a function of both its structure and composition. Currently, however, we have no method to measure the interior composition of exoplanets. In our Solar system, though, Solar abundances for refractory elements mirror the Earth's to within ~10%, allowing the adoption of Solar abundances as proxies for Earth's. It is not known, however, whether this mirroring of stellar and terrestrial planet abundances holds true for other star-planet systems without determination of the composition of initial planetesimals via condensation sequence calculations. Currently, all code for ascertaining these sequences is commercially available or closed-source. We present, then, the open-source Arbitrary Composition Condensation Sequence calculator (ArCCoS) for converting the elemental composition of a parent star to that of the planet-building material, as well as the extent of oxidation within the planetesimals. These data allow us to constrain the likelihood of one of the main drivers of plate tectonics: the basalt-to-eclogite transition in subducting plates. Unlike basalt, eclogite is denser than the surrounding mantle and thus sinks into the mantle, pulling the overlying slab with it. Without this higher density relative to the mantle, plates stagnate at shallow depths, shutting off plate tectonics. Using the results of ArCCoS as abundance inputs into the MELTS and HeFESTo thermodynamic models, we calculate phase relations for the first basaltic crust and depleted mantle of a terrestrial planet produced from
NASA Astrophysics Data System (ADS)
West, Anthony C. F.; Novakowski, Kent S.; Gazor, Saeed
2006-06-01
We propose a new method to estimate the transmissivities of bedrock fractures from transmissivities measured in intervals of fixed length along a borehole. We define the scale of a fracture set by the inverse of the density of the Poisson point process assumed to represent their locations along the borehole wall, and we assume a lognormal distribution for their transmissivities. The parameters of the latter distribution are estimated by maximizing the likelihood of a left-censored subset of the data where the degree of censorship depends on the scale of the considered fracture set. We applied the method to sets of interval transmissivities simulated by summing random fracture transmissivities drawn from a specified population. We found the estimated distributions compared well to the transmissivity distributions of similarly scaled subsets of the most transmissive fractures from among the specified population. Estimation accuracy was most sensitive to the variance in the transmissivities of the fracture population. Using the proposed method, we estimated the transmissivities of fractures at increasing scale from hydraulic test data collected at a fixed scale in Smithville, Ontario, Canada. This is an important advancement since the resultant curves of transmissivity parameters versus fracture set scale would only previously have been obtainable from hydraulic tests conducted with increasing test interval length and with degrading equipment precision. Finally, on the basis of the properties of the proposed method, we propose guidelines for the design of fixed interval length hydraulic testing programs that require minimal prior knowledge of the rock.
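The core of such an approach, maximizing a left-censored lognormal likelihood with density terms for detected values and CDF terms for values known only to lie below a limit, can be sketched in Python. Parameter and limit values are invented for illustration; this is not the authors' code:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
mu_true, sigma_true = -6.0, 1.5        # illustrative log-transmissivity parameters
logT = rng.normal(mu_true, sigma_true, 3000)
limit = -7.0                            # detection/censoring limit
observed = logT[logT > limit]
n_cens = np.sum(logT <= limit)          # measurements known only to be below the limit

# Censored-data log-likelihood: density term for detected values,
# CDF term for each left-censored one
def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = np.sum(norm.logpdf(observed, mu, sigma))
    ll += n_cens * norm.logcdf(limit, mu, sigma)
    return -ll

fit = minimize(neg_log_lik, x0=[-5.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
```

Dropping the censored observations instead of including their CDF terms would bias the mean upward and the variance downward, which is why the censored likelihood matters for interval-transmissivity data.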
Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise
NASA Astrophysics Data System (ADS)
Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.
2015-11-01
We consider the problem of estimating the direction of arrival of a signal embedded in $K$-distributed noise, when secondary data which contains noise only are assumed to be available. Based upon a recent formula of the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression of the FIM with the two data sets framework. In the specific case of $K$-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for $K$-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming known covariance matrix: the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual $T^{-1}$. Simulations illustrate the different behaviors of the estimators, depending on the FIM being bounded or not.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Wang, Yulong
2016-04-01
Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block length effect degrades system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion-managed (DM) fiber links. Based on our model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block length effect. We derive an analytical expression for the phase error variance to predict the performance of a coherent receiver with the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that our flexible DA-ML algorithm significantly improves performance compared with the conventional DA-ML algorithm at a fixed block length. Compared with the conventional DA-ML with optimum block length, our flexible DA-ML obtains better system performance, indicating that it is more effective at mitigating phase noise than the conventional DA-ML algorithm.
NASA Astrophysics Data System (ADS)
Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong
2014-09-01
An image restoration method based on Poisson maximum-likelihood estimation (PMLE) for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, with an automatic acceleration method used to speed up the iterative process, and an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and the PSNR (peak signal-to-noise ratio) are used to validate the restoration effect. The simulation results show that the number of iterations affects both the PSNR of the processed image and the operation time, and that the method restores images of earthquake ruin scenes effectively and is practical to apply.
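The classical fixed point of the Poisson maximum-likelihood restoration problem is the Richardson-Lucy iteration. A minimal 1-D sketch, without the automatic acceleration the abstract describes and with a made-up scene and blur kernel, looks like this:

```python
import numpy as np

# 1-D Richardson-Lucy deconvolution: the multiplicative fixed-point
# iteration for the Poisson maximum-likelihood problem with a known PSF
def richardson_lucy(blurred, psf, n_iter=50):
    psf_flip = psf[::-1]                          # correlation = convolution with flipped PSF
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # data / model prediction
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy example: a two-spike "scene" blurred by a boxcar kernel
truth = np.zeros(64); truth[20] = 5.0; truth[40] = 3.0
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)          # re-concentrates flux at the spikes
```

Each iteration multiplies the current estimate by the back-projected ratio of data to model prediction, so the estimate stays non-negative and flux is redistributed toward the true sources; acceleration schemes like the one in the abstract reduce the number of such iterations needed.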
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C
2011-11-01
Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method is tolerant to error rates of up to 5% per locus and can theoretically work in datasets with as few as 60% of loci genotyped. Our method can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals with known capture histories that is suitable for downstream analysis with traditional mark-recapture methods.
Developing New Rainfall Estimates to Identify the Likelihood of Agricultural Drought in Mesoamerica
NASA Astrophysics Data System (ADS)
Pedreros, D. H.; Funk, C. C.; Husak, G. J.; Michaelsen, J.; Peterson, P.; Lasndsfeld, M.; Rowland, J.; Aguilar, L.; Rodriguez, M.
2012-12-01
The population in Central America was estimated at ~40 million people in 2009, with 65% in rural areas relying directly on local agricultural production for subsistence, and additional urban populations relying on regional production. Mapping rainfall patterns and amounts in Central America is a complex task due to the rough topography and the influence of the two oceans on either side of this narrow land mass. Characterizing precipitation amounts in both time and space is of great importance for monitoring agricultural food production for food security analysis. With the goal of developing reliable rainfall fields, the Famine Early Warning Systems Network (FEWS NET) has compiled a dense set of historical rainfall stations for Central America through cooperation with meteorological services and global databases. The station database covers the years 1900-present, with the highest density between 1970 and 2011. Interpolating the station data by itself does not provide a reliable result because it ignores the topographical influences that dominate the region. To account for this, climatological rainfall fields were used to support the interpolation of the station data using a modified Inverse Distance Weighting process. By blending the station data with the climatological fields, a historical rainfall database was compiled for 1970-2011 at 5 km resolution for every five-day interval. This new database opens the door to analyses such as the impact of sea surface temperature on rainfall patterns, changes to the typical dry spell during the rainy season, and characterization of drought frequency and rainfall trends, among others. This study uses the historical database to identify the frequency of agricultural drought in the region and explores possible changes in precipitation patterns during the past 40 years. A threshold of 500 mm of rainfall during the growing season was used to define agricultural drought for maize. This threshold was selected based on assessments of crop
Estimating Amazonian rainforest stability and the likelihood for large-scale forest dieback
NASA Astrophysics Data System (ADS)
Rammig, Anja; Thonicke, Kirsten; Jupp, Tim; Ostberg, Sebastian; Heinke, Jens; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter
2010-05-01
Annually, tropical forests process approximately 18 Pg of carbon through respiration and photosynthesis - more than twice the rate of anthropogenic fossil fuel emissions. Current climate change may be transforming this carbon sink into a carbon source by changing forest structure and dynamics. Increasing temperatures, potentially decreasing precipitation, and thus prolonged drought stress may lead to increasing physiological stress and reduced productivity for trees. Resulting decreases in evapotranspiration, and therefore convective precipitation, could further accelerate drought conditions, destabilize the tropical ecosystem as a whole, and lead to an 'Amazon forest dieback'. The projected direction and intensity of climate change vary widely within the region and between different scenarios from climate models (GCMs). In the scope of a World Bank-funded study, we assessed the 24 General Circulation Models (GCMs) evaluated in the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4) with respect to their capability to reproduce present-day climate in the Amazon basin using a Bayesian approach. With this approach, greater weight is assigned to the models that simulate the annual cycle of rainfall well. We then use the resulting weightings to create probability density functions (PDFs) for future forest biomass changes as simulated by the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJmL) to estimate the risk of potential Amazon rainforest dieback. Our results show contrasting changes in forest biomass throughout five regions of northern South America: if photosynthetic capacity and water use efficiency are enhanced by CO2, biomass increases across all five regions. However, if CO2-fertilisation is assumed to be absent or less important, then substantial dieback occurs in some scenarios and thus the risk of forest dieback is considerably higher. Particularly affected are regions in the central Amazon basin. The range of
Hetsroni, Amir; Lowenstein, Hila
2013-02-01
Religiosity may change the direction of the effect of TV viewing on assessment of the likelihood of personal victimization and estimates concerning crime prevalence. A content analysis of a representative sample of TV programming (56 hours of prime-time shows) was done to identify the most common crimes on television, followed by a survey of a representative sample of the adult public in a large urban district (778 respondents) who were asked to estimate the prevalence of these crimes and to assess the likelihood of themselves being victimized. People who defined themselves as non-religious increased their estimates of prevalence for crimes often depicted on TV, as they reported more time watching TV (ordinary cultivation effect), whereas estimates regarding the prevalence of crime and assessment of victimization likelihood among religious respondents were lower with reports of more time devoted to television viewing (counter-cultivation effect).
NASA Astrophysics Data System (ADS)
Simons, Frederik J.; Olhede, Sofia C.
2013-06-01
Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown the elastic flexural rigidity of the lithosphere. In the guise of an equivalent `effective elastic thickness', this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity, have revealed themselves to have insufficient power to independently constrain both it and the additional unknown initial-loading fraction and load-correlation factors, respectively. Solving this extremely ill-posed inversion problem leads to non-uniqueness and is further complicated by practical considerations such as the choice of regularizing data tapers to render the analysis sufficiently selective both in the spatial and spectral domains. Here, we rewrite the problem in a form amenable to maximum-likelihood estimation theory, which we show yields unbiased, minimum-variance estimates of flexural rigidity, initial-loading fraction and load correlation, each of those separably resolved with little a posteriori correlation between their estimates. We are also able to separately characterize the isotropic spectral shape of the initial-loading processes. Our procedure is well-posed and computationally tractable for the two-interface case. The resulting algorithm is validated by extensive simulations whose behaviour is
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the large number of iterations the EM algorithm requires. To decrease computing time we explored the use of faster-converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was sought using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML converged faster than MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solutions of the mixed model equations, one for each parameter to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of an MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with different kinds of large-scale problem settings.
Veklerov, E.; Llacer, J.; Hoffman, E.J.
1987-10-01
In order to study properties of the maximum likelihood estimator (MLE) algorithm for image reconstruction in positron emission tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation that takes into account Compton scattering in the detectors, positron range, and other effects. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule, derived earlier, that allows the user to stop the iterative process before the images begin to deteriorate is also tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
Waits, L P; Sullivan, J; O'Brien, S J; Ward, R H
1999-10-01
The bear family (Ursidae) presents a number of phylogenetic ambiguities as the evolutionary relationships of the six youngest members (ursine bears) are largely unresolved. Recent mitochondrial DNA analyses have produced conflicting results with respect to the phylogeny of ursine bears. In an attempt to resolve these issues, we obtained 1916 nucleotides of mitochondrial DNA sequence data from six gene segments for all eight bear species and conducted maximum likelihood and maximum parsimony analyses on all fragments separately and combined. All six single-region gene trees gave different phylogenetic estimates; however, only for control region data was this significantly incongruent with the results from the combined data. The optimal phylogeny for the combined data set suggests that the giant panda is most basal followed by the spectacled bear. The sloth bear is the basal ursine bear, and there is weak support for a sister taxon relationship of the American and Asiatic black bears. The sun bear is sister taxon to the youngest clade containing brown bears and polar bears. Statistical analyses of alternate hypotheses revealed a lack of strong support for many of the relationships. We suggest that the difficulties surrounding the resolution of the evolutionary relationships of the Ursidae are linked to the existence of sequential rapid radiation events in bear evolution. Thus, unresolved branching orders during these time periods may represent an accurate representation of the evolutionary history of bear species. PMID:10508542
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
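For a known reference waveform in white Gaussian noise, the ML delay estimate reduces to the argmax of the cross-correlation with the template. A toy Python sketch, using a synthetic broadband template as a stand-in for a real ERP and treating the template as known rather than jointly estimated as in the study, is:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_delay = 256, 17
template = rng.standard_normal(n)   # hypothetical broadband reference waveform

# One noisy "trial": the template circularly shifted by an unknown delay
trial = np.roll(template, true_delay) + 0.5 * rng.standard_normal(n)

# Under white Gaussian noise, maximizing the likelihood over the delay
# is equivalent to maximizing the cross-correlation with the template
corr = [np.dot(trial, np.roll(template, d)) for d in range(n)]
delay_hat = int(np.argmax(corr))
```

Averaging trials without first compensating such per-trial delays smears the ERP, which is the degradation the proposed joint-ML schemes are designed to avoid.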
NASA Astrophysics Data System (ADS)
Sage, J. P.; Mayles, W. P. M.; Mayles, H. M.; Syndikus, I.
2014-10-01
Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series where the same quantity is measured using different independent methods. The technique was tested against artificial data sets; generated for values of underlying variation in the quantity and measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were determined within 0.1 mm. The technique was applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods.
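The idea of separating underlying variation from per-method measurement error can be sketched for the two-method case, where the Gaussian ML estimates have a closed form via the cross-covariance (provided all three variance estimates come out positive). This is a toy simulation with illustrative millimetre values; the study's parallel analysis handles more methods simultaneously:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000
sd_true, sd_e1, sd_e2 = 2.7, 1.8, 1.2    # illustrative values (mm)

s = rng.normal(0, sd_true, n)             # underlying positional variation
x = s + rng.normal(0, sd_e1, n)           # method 1 (e.g. portal imaging)
y = s + rng.normal(0, sd_e2, n)           # method 2 (e.g. body surface measures)

# For jointly Gaussian data the cross-covariance isolates the shared
# underlying variance; each method's excess variance is its own
# measurement error
cov_xy = np.cov(x, y, bias=True)
var_s_hat = cov_xy[0, 1]
var_e1_hat = cov_xy[0, 0] - var_s_hat
var_e2_hat = cov_xy[1, 1] - var_s_hat
```

With more than two methods the same decomposition is over-determined and is instead obtained by numerically maximizing the joint likelihood, which is the role MLE plays in the analysis described above.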
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
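For a single trial in white Gaussian noise, the ML delay estimate reduces to the lag maximizing the cross-correlation with a reference template; the paper's joint-ML schemes generalize this beyond a fixed (e.g. averaged) reference. A minimal sketch with an invented Gaussian-bump template:

```python
import numpy as np

def ml_delay(trial, ref):
    # Under white Gaussian noise the ML delay is the lag that
    # maximizes the cross-correlation with the reference.
    corr = np.correlate(trial, ref, mode='full')
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(1)
n = 200
template = np.exp(-0.5 * ((np.arange(n) - 60) / 3.0) ** 2)  # toy ERP bump
trial = np.roll(template, 15) + rng.normal(0.0, 0.1, n)     # delayed, noisy trial
d = ml_delay(trial, template)                               # close to 15
```

Note that `np.correlate(a, v, mode='full')` places zero lag at index `len(v) - 1`, hence the subtraction.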
Susko, Edward
2010-07-01
The most frequent measure of phylogenetic uncertainty for splits is bootstrap support. Although large bootstrap support intuitively suggests that a split in a tree is well supported, it has not been clear how large bootstrap support needs to be to conclude that there is significant evidence that a hypothesized split is present. Indeed, recent work has shown that bootstrap support is not first-order correct and thus cannot be directly used for hypothesis testing. We present methods that adjust bootstrap support values in a maximum likelihood (ML) setting so that they have an interpretation corresponding to P values in conventional hypothesis testing; for instance, adjusted bootstrap support larger than 95% occurs only 5% of the time if the split is not present. Through examples and simulation settings, it is found that adjustments always increase the level of support. We also find that the nature of the adjustment is fairly constant across parameter settings. Finally, we consider adjustments that take into account the data-dependent nature of many hypotheses about splits: the hypothesis that they are present is being tested because they are in the tree estimated through ML. Here, in contrast, we find that bootstrap probability often needs to be adjusted downwards.
Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.
1990-10-01
This paper reports on the progress to date in carrying out Receiver Operating Characteristic (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation in FBP that is high and largely independent of the number of counts. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, the lower standard deviation translates into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of ¹⁸F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive, and we propose a variation based on reading three consecutive slices at a time, rating only the center slice.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Wang, Yulong
2016-01-01
We propose a general model to entirely describe XPM effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of phase error variance for optimizing block length of QPSK channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of coherent receiver. Bit error rate (BER) performance in QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using the DA-ML with optimum block length, bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block length of 10, 20 and 40 at BER of 10-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when in-line residual dispersion is 0 ps/nm.
NASA Astrophysics Data System (ADS)
Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.
2015-12-01
What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is a strong assumption; we are led to a "baseline" over which to compute means and variances. We have then subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself isn't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matérn form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal being to assign likely formation histories for the patches under consideration.
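The Whittle device can be illustrated in one dimension: the exact Gaussian likelihood is replaced by a sum over periodogram ordinates of log S(f) + I(f)/S(f), which avoids forming large covariance matrices. The sketch below fits an AR(1) spectrum as a stand-in for the Matérn family discussed above; all names and values are invented for the illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, phi = 4096, 0.7
x = np.empty(n)                                 # AR(1) sample, unit innovations
x[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - phi ** 2))
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

f = np.fft.rfftfreq(n)[1:-1]                    # drop DC and Nyquist
I = np.abs(np.fft.rfft(x)[1:-1]) ** 2 / n       # periodogram ordinates

def whittle_nll(p):
    # Whittle NLL: sum over frequencies of log S(f) + I(f) / S(f)
    S = 1.0 / np.abs(1.0 - p * np.exp(-2j * np.pi * f)) ** 2
    return np.sum(np.log(S) + I / S)

phi_hat = minimize_scalar(whittle_nll, bounds=(0.0, 0.99), method='bounded').x
```

The same construction extends to 2-D grids and to the three-parameter Matérn spectrum, which is what the abstract's procedure exploits.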
Shao, Na; Yang, Jing; Li, Jianpeng; Shang, Hui-Fang
2014-01-01
Numerous voxel-based morphometry (VBM) studies on gray matter (GM) of patients with progressive supranuclear palsy (PSP) and Parkinson's disease (PD) have been conducted separately. Identifying the different neuroanatomical changes in GM resulting from PSP and PD through meta-analysis will aid the differential diagnosis of PSP and PD. In this study, a systematic review of VBM studies of patients with PSP and PD relative to healthy control (HC) in the Embase and PubMed databases from January 1995 to April 2013 was conducted. The anatomical distribution of the coordinates of GM differences was meta-analyzed using anatomical likelihood estimation. Separate maps of GM changes were constructed and subtraction meta-analysis was performed to explore the differences in GM abnormalities between PSP and PD. Nine PSP studies and 24 PD studies were included. GM reductions were present in the bilateral thalamus, basal ganglia, midbrain, insular cortex and inferior frontal gyrus, and left precentral gyrus and anterior cingulate gyrus in PSP. Atrophy of GM was concentrated in the bilateral middle and inferior frontal gyrus, precuneus, left precentral gyrus, middle temporal gyrus, right superior parietal lobule, and right cuneus in PD. Subtraction meta-analysis indicated that GM volume was lesser in the bilateral midbrain, thalamus, and insula in PSP compared with that in PD. Our meta-analysis indicated that PSP and PD shared a similar distribution of neuroanatomical changes in the frontal lobe, including inferior frontal gyrus and precentral gyrus, and that atrophy of the midbrain, thalamus, and insula are neuroanatomical markers for differentiating PSP from PD. PMID:24600372
NASA Astrophysics Data System (ADS)
Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.
2016-04-01
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
Naftali, E; Makris, N C
2001-10-01
Analytic expressions for the first order bias and second order covariance of a general maximum likelihood estimate (MLE) are presented. These expressions are used to determine general analytic conditions on sample size, or signal-to-noise ratio (SNR), that are necessary for an MLE to become asymptotically unbiased and attain minimum variance as expressed by the Cramer-Rao lower bound (CRLB). The expressions are then evaluated for multivariate Gaussian data. The results can be used to determine asymptotic biases, variances, and conditions for estimator optimality in a wide range of inverse problems encountered in ocean acoustics and many other disciplines. The results are then applied to rigorously determine conditions on SNR necessary for the MLE to become unbiased and attain minimum variance in the classical active sonar and radar time-delay and Doppler-shift estimation problems. The time-delay MLE is the time lag at the peak value of a matched filter output. It is shown that the matched filter estimate attains the CRLB for the signal's position when the SNR is much larger than the kurtosis of the expected signal's energy spectrum. The Doppler-shift MLE exhibits dual behavior for narrow band analytic signals. In a companion paper, the general theory presented here is applied to the problem of estimating the range and depth of an acoustic source submerged in an ocean waveguide.
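The flavor of these asymptotic results can be seen in the textbook case: the ML estimate of a Gaussian variance divides by n and so carries a bias of -sigma^2/n (here the first-order term is exact), which vanishes as the sample size grows. A quick Monte Carlo check, illustrative only — the paper's expressions cover general MLEs and SNR conditions:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2, trials = 4.0, 20000
biases = {}
for n in (5, 50, 500):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
    mle = x.var(axis=1)               # ML variance estimate (divides by n)
    biases[n] = mle.mean() - sigma2   # empirical bias, close to -sigma2 / n
```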
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
Not Available
1984-05-01
The disposal of radioactive wastes in deep geologic formations provides a means of isolating the waste from people until the radioactivity has decayed to safe levels. However, isolating people from the wastes is a different problem, since we do not know what the future condition of society will be. The Human Interference Task Force was convened by the US Department of Energy to determine whether reasonable means exist (or could be developed) to reduce the likelihood of future humans unintentionally intruding on radioactive waste isolation systems. The task force concluded that significant reductions in the likelihood of human interference could be achieved, for perhaps thousands of years into the future, if appropriate steps are taken to communicate the existence of the repository. Consequently, for two years the task force directed most of its study toward the area of long-term communication. Methods are discussed for achieving long-term communication by using permanent markers and widely disseminated records, with various steps taken to provide multiple levels of protection against loss, destruction, and major language/societal changes. Also developed is the concept of a universal symbol to denote "Caution - Biohazardous Waste Buried Here." If used for the thousands of non-radioactive biohazardous waste sites in this country alone, such a symbol could transcend generations and language changes, thereby vastly improving the likelihood of successful isolation of all buried biohazardous wastes.
NASA Astrophysics Data System (ADS)
Kim, Younggwan; Suh, Youngjoo; Kim, Hoirin
2011-12-01
The role of the statistical model-based voice activity detector (SMVAD) is to detect speech regions from input signals using statistical models of noise and noisy speech. The decision rule of the SMVAD is based on the likelihood ratio test (LRT). The LRT-based decision rule may cause detection errors because of the statistical properties of noise and speech signals. In this article, we first analyze the reasons why these detection errors occur and then propose two modified decision rules using reliable likelihood ratios (LRs). We also propose an effective weighting scheme considering the spectral characteristics of noise and speech signals. In our experiments, with almost no additional computation, the proposed methods show significant performance improvement in various noise conditions. Experimental results also show that the proposed weighting scheme provides additional performance improvement over the two proposed SMVADs.
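The LRT decision rule at the core of an SMVAD can be sketched at frame level: compare a frame's likelihood under a noise-only Gaussian model against a noisy-speech model and threshold the log ratio. A toy sketch with known variances — the paper works per frequency bin with estimated statistics, and proposes refinements of exactly this rule:

```python
import numpy as np

def log_lr(frame, var_noise, var_speech):
    # Log likelihood ratio of zero-mean Gaussian models:
    # H1 (noisy speech, variance v0 + vs) vs H0 (noise only, variance v0).
    v0, v1 = var_noise, var_noise + var_speech
    energy = np.sum(frame ** 2)
    return 0.5 * (len(frame) * np.log(v0 / v1) + energy * (1.0 / v0 - 1.0 / v1))

rng = np.random.default_rng(2)
noise_frame = rng.normal(0.0, 1.0, 256)                 # noise only
speech_frame = rng.normal(0.0, np.sqrt(5.0), 256)       # speech + noise
llr_n = log_lr(noise_frame, 1.0, 4.0)                   # strongly negative
llr_s = log_lr(speech_frame, 1.0, 4.0)                  # strongly positive
```

A positive log ratio favors the speech hypothesis; the detection errors analyzed in the abstract arise when individual LRs are unreliable, motivating the modified rules.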
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
2013-01-01
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of the experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370
Paperno, Denis; Marelli, Marco; Tentori, Katya; Baroni, Marco
2014-11-01
This paper draws a connection between statistical word association measures used in linguistics and confirmation measures from epistemology. Having theoretically established the connection, we replicate, in the new context of the judgments of word co-occurrence, an intriguing finding from the psychology of reasoning, namely that confirmation values affect intuitions about likelihood. We show that the effect, despite being based in this case on very subtle statistical insights about thousands of words, is stable across three different experimental settings. Our theoretical and empirical results suggest that factors affecting traditional reasoning tasks are also at play when linguistic knowledge is probed, and they provide further evidence for the importance of confirmation in a new domain.
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard
1990-01-01
The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal area (500 sq km), grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes. PMID:27038887
Albright, S G; Hook, E B
1980-01-01
The proportions of Down's syndrome livebirths associated with a Robertsonian translocation inherited from a carrier parent were estimated from data in the New York State Chromosome Registry and in two previous publications. Indirect estimates were made in each 5-year maternal age interval; these were derived from mutation rates for these translocations and maternal age specific prevalence rates in livebirths. The proportions diminished steadily with increasing maternal age. The ranges for the seven maternal age groups from under 20 to 45-49 were: 1.1 to 2.8%, 1.0 to 2.7%, 0.7 to 1.8%, 0.5 to 1.3%, 0.2 to 0.4%, 0.05 to 0.1%, and 0.02 to 0.04%. Direct estimates from the observed data could only be attempted for two age groups, women under 30 and those 30 or more. For those under 30 the range in proportions was 0.9 to 1.9%, and for those 30 and over, 0.2 to 0.4%. In general the lowest proportions at any age were derived from New York State data and the highest from Japanese data. PMID:6451703
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
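The abundance-estimation step — writing the measured intensity as a nonnegative combination of a few basis intensities — can be illustrated with a least-squares surrogate of the constrained convex ML problem. The basis curves below are invented placeholders, not real SAXS profiles, and NNLS plus renormalization stands in for the full MLE:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
q = np.linspace(0.01, 0.5, 200)
# Hypothetical smooth basis intensities for three conformations (not real SAXS).
basis = np.column_stack([np.exp(-(q * s) ** 2) for s in (20.0, 35.0, 50.0)])
w_true = np.array([0.6, 0.3, 0.1])                # true relative abundances
meas = basis @ w_true + rng.normal(0.0, 1e-3, q.size)

w, _ = nnls(basis, meas)    # non-negativity constraint handled by NNLS
w /= w.sum()                # renormalize to relative abundances
```

The clustering step in the abstract serves exactly to keep `basis` well conditioned: with too many near-identical conformations the recovery above would become unstable.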
NASA Astrophysics Data System (ADS)
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable, and detects multimodality.
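The MLE-versus-MCMC comparison can be miniaturized: for a trend-plus-white-noise series the ML trend rate has a closed form, and a random-walk Metropolis chain targeting the same likelihood (with flat priors) recovers it while also providing parameter uncertainties directly from the posterior sample. Every name and step size below is invented for the sketch:

```python
import numpy as np

def metropolis(logpost, x0, steps, prop_sd, rng):
    """Random-walk Metropolis sampler; returns the chain of samples."""
    chain = np.empty((steps, len(x0)))
    cur = np.asarray(x0, float)
    lp = logpost(cur)
    for i in range(steps):
        cand = cur + rng.normal(0.0, prop_sd, size=cur.size)
        lp_c = logpost(cand)
        if np.log(rng.uniform()) < lp_c - lp:   # Metropolis accept/reject
            cur, lp = cand, lp_c
        chain[i] = cur
    return chain

rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)
y = 0.5 * t + rng.normal(0.0, 2.0, 100)        # trend + white noise

rate_mle = (t @ y) / (t @ t)                   # closed-form ML trend estimate

def logpost(p):                                # flat priors; p = (rate, log_sigma)
    rate, log_s = p
    r = y - rate * t
    return -0.5 * np.exp(-2.0 * log_s) * np.sum(r ** 2) - y.size * log_s

chain = metropolis(logpost, [rate_mle, np.log(2.0)], 20000, [0.003, 0.05], rng)
rate_mcmc = chain[5000:, 0].mean()             # posterior mean after burn-in
```

The spread of `chain[5000:, 0]` gives the rate uncertainty without extra computation, mirroring the advantage the abstract notes for the spectral index.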
NASA Technical Reports Server (NTRS)
Lai, Jonathan Y.
1994-01-01
This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years.
Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F; Thielemans, Kris
2016-01-01
This work provides an insight into positron emission tomography (PET) joint image reconstruction/motion estimation (JRM) by maximization of the likelihood, where the probabilistic model accounts for warped attenuation. Our analysis shows that maximum-likelihood (ML) JRM returns the same reconstructed gates for any attenuation map (μ-map) that is a deformation of a given μ-map, regardless of its alignment with the PET gates. We derived a joint optimization algorithm accordingly, and applied it to simulated and patient gated PET data. We first evaluated the proposed algorithm on simulations of respiratory gated PET/CT data based on the XCAT phantom. Our results show that independently of which μ-map is used as input to JRM: (i) the warped μ-maps correspond to the gated μ-maps, (ii) JRM outperforms the traditional post-registration reconstruction and consolidation (PRRC) for hot lesion quantification and (iii) reconstructed gated PET images are similar to those obtained with gated μ-maps. This suggests that a breath-held μ-map can be used. We then applied JRM on patient data with a μ-map derived from a breath-held high resolution CT (HRCT), and compared the results with PRRC, where each reconstructed PET image was obtained with a corresponding cine-CT gated μ-map. Results show that JRM with breath-held HRCT achieves similar reconstruction to that using PRRC with cine-CT. This suggests a practical low-dose solution for implementation of motion-corrected respiratory gated PET/CT.
Matched-field depth estimation for active sonar.
Hickman, Granger; Krolik, Jeffrey L
2004-02-01
This work concerns the problem of estimating the depth of a submerged scatterer in a shallow-water ocean by using an active sonar and a horizontal receiver array. As in passive matched-field processing (MFP) techniques, numerical modeling of multipath propagation is used to facilitate localization. However, unlike passive MFP methods where estimation of source range is critically dependent on relative modal phase modeling, in active sonar source range is approximately known from travel-time measurements. Thus the proposed matched-field depth estimation (MFDE) method does not require knowledge of the complex relative multipath amplitudes which also depend on the unknown scatterer characteristics. Depth localization is achieved by modeling depth-dependent relative delays and elevation angle spreads between multipaths. A maximum likelihood depth estimate is derived under the assumption that returns from a sequence of pings are uncorrelated and the scatterer is at constant depth. The Cramér-Rao lower bound on depth estimation mean-square-error is computed and compared with Monte Carlo simulation results for a typical range-dependent, shallow-water Mediterranean environment. Depth estimation performance to within 10% of the water column depth is predicted at signal-to-noise ratios of greater than 10 dB. Real data results are reported for depth estimation of an echo repeater to within 10-m accuracy in this same shallow water environment.
Fosgate, G T; Adesiyun, A A; Hird, D W; Hietala, S K
2006-08-17
The likelihood ratio (LR) is a measure of association that quantifies how many times more likely a particular test result is in an infected animal than in an uninfected one. LRs are ratios of conditional probabilities and cannot be interpreted at the individual-animal level without information on pretest probabilities; their usefulness is that they update the prior belief that the individual has the outcome of interest through a modification of Bayes' theorem. Bayesian analytic techniques can be used to evaluate diagnostic tests and estimate LRs when no gold standard is available. As an example, these techniques were applied to the estimation of LRs for a competitive ELISA (c-ELISA) for diagnosis of Brucella abortus infection in cattle and water buffalo in Trinidad. Sera from four herds of cattle (n=391) and four herds of water buffalo (n=381) in Trinidad were evaluated for Brucella-specific antibodies using a c-ELISA. On the basis of previous serologic (agglutination) test results in the same animals, iterative simulation modeling was used to classify animals as positive or negative for Brucella infection. LRs were calculated for six categories of the c-ELISA proportion inhibition (PI) results, pooled for cattle and water buffalo, yielding the following estimates (95% probability intervals): <0.10 PI, 0.05 (0-0.13); 0.10-0.249 PI, 0.11 (0.04-0.20); 0.25-0.349 PI, 0.77 (0.23-1.63); 0.35-0.499 PI, 3.22 (1.39-6.84); 0.50-0.749 PI, 17.9 (6.39-77.4); ≥0.75 PI, 423 (129-infinity). LRs are important for calculating post-test probabilities while maintaining the quantitative nature of diagnostic test results.
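The Bayes update that makes LRs useful works on the odds scale: post-test odds = pretest odds × LR. A minimal sketch, combining two of the reported c-ELISA LR point estimates with a hypothetical 10% pretest probability (the prevalence figure is invented for illustration, not a value from the study):

```python
def post_test_probability(pretest_p, lr):
    """Update a pretest probability with a likelihood ratio (Bayes' theorem),
    working on the odds scale: post-test odds = pretest odds * LR."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical 10% pretest probability (e.g. an assumed herd prevalence),
# combined with two of the reported c-ELISA LR point estimates.
print(post_test_probability(0.10, 17.9))   # PI 0.50-0.749  -> ~0.67
print(post_test_probability(0.10, 0.05))   # PI < 0.10      -> ~0.0055
```

The same test result thus moves a 10% prior to roughly 67% or 0.6% depending on the PI category, which is the quantitative behaviour a dichotomized positive/negative cutoff discards.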
Darnaude, Audrey M.
2016-01-01
Background Mixture models (MM) can be used to describe mixed stocks in terms of three sets of parameters: the total number of contributing sources, their chemical baseline signatures, and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, the mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete-sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011) from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, in which 0–4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, in which the simulated data were separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess the reliability of each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline-signature estimates also increased with the number of non-sampled sources; across all sampling scenarios these estimates tended to be less biased, but more uncertain, than the mixing-proportion ones (BI < 0.13, SE < 0.29). Increasing
NASA Astrophysics Data System (ADS)
Park, Ryeojin
This dissertation aims to investigate two different applications in optics using maximum-likelihood (ML) estimation. The first application of ML estimation is in optical metrology. For this application, an innovative iterative search method called the synthetic phase-shifting (SPS) algorithm is proposed. This search algorithm is used for estimation of a wavefront that is described by a finite set of Zernike Fringe (ZF) polynomials. In this work, we estimate the ZF coefficients, i.e. the parameter values of the wavefront, using a single interferogram obtained from a point-diffraction interferometer (PDI). In order to find the estimates, we first calculate the squared difference between the measured and simulated interferograms. Under certain assumptions, this squared-difference image can be treated as an interferogram showing the phase difference between the true wavefront deviation and the simulated wavefront deviation. The wavefront deviation is defined as the difference between the reference and the test wavefronts. We calculate the phase difference using a traditional phase-shifting technique without physical phase-shifters. We present a detailed forward model for the PDI interferogram, including the effect of the finite size of a detector pixel. The algorithm was validated with computational studies, and its performance and constraints are discussed. A prototype PDI was built and the algorithm was also experimentally validated. A large wavefront deviation was successfully estimated without using null optics or physical phase-shifters. The experimental result shows that the proposed algorithm has great potential to provide an accurate tool for non-null testing. The second application of ML estimation is in nuclear medical imaging. A high-resolution positron tomography scanner called BazookaPET is proposed. We have designed and developed a novel proof-of-concept detector element for a PET system called BazookaPET. In order to complete the PET configuration, at least
ERIC Educational Resources Information Center
Fennell, Mary L.; And Others
This document is part of a series of chapters described in SO 011 759. This chapter reports the results of Monte Carlo simulations designed to analyze problems of using maximum likelihood estimation (MLE: see SO 011 767) in research models which combine longitudinal and dynamic behavior data in studies of change. Four complications--censoring of…
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Quasi-likelihood for Spatial Point Processes
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
2014-01-01
Summary Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation, which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).
Estimation of avidin activity by two methods.
Borza, B; Marcheş, F; Repanovici, R; Burducea, O; Popa, L M
1991-01-01
The biological activity of avidin was estimated by two different methods. The spectrophotometric method titrated avidin with biotin in the presence of 4-hydroxyazobenzene-2'-carboxylic acid as indicator; in the radioisotopic determination, titration with tritiated biotin was accomplished. Both methods led to the same results, but the spectrophotometric one consumes less avidin and is more rapid, making it more convenient.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
2010-01-01
Background The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s), but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command-line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for setting parameters, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
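The quoted probabilities follow from a Poisson model for decadal eruption counts. A minimal sketch: the ~7 eruptions/decade rate for VEI>=4 comes from the text above, while the VEI>=5 and VEI>=6 rates below are illustrative values chosen to be consistent with the quoted probabilities, not figures taken from the paper.

```python
from math import exp

def p_at_least_one(rate_per_decade):
    """P(N >= 1) for a Poisson-distributed decadal count with the given mean."""
    return 1.0 - exp(-rate_per_decade)

# Assumed decadal rates: ~7/decade for VEI>=4 (from the text); the VEI>=5
# and VEI>=6 rates are illustrative, back-solved from the stated probabilities.
for vei, rate in [(4, 7.0), (5, 0.67), (6, 0.20)]:
    print(f"VEI>={vei}: P(>=1 per decade) = {p_at_least_one(rate):.3f}")
```

This reproduces the >99 percent, ~49 percent, and ~18 percent figures in the abstract.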
NASA Technical Reports Server (NTRS)
Pierson, W. J.
1982-01-01
The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.
ERIC Educational Resources Information Center
Spaniol, Julia; Davidson, Patrick S. R.; Kim, Alice S. N.; Han, Hua; Moscovitch, Morris; Grady, Cheryl L.
2009-01-01
The recent surge in event-related fMRI studies of episodic memory has generated a wealth of information about the neural correlates of encoding and retrieval processes. However, interpretation of individual studies is hampered by methodological differences, and by the fact that sample sizes are typically small. We submitted results from studies of…
Likelihood and clinical trials.
Hill, G; Forbes, W; Kozak, J; MacNeill, I
2000-03-01
The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory. PMID:10760630
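The likelihood approach can be illustrated with a toy calculation: compare the support the data give to two candidate response rates directly on the likelihood scale, with no significance test or decision rule involved. The trial numbers below are hypothetical, not Bradford Hill's data.

```python
from math import comb

def binom_lik(p, x, n):
    """Binomial likelihood of response probability p given x responders in n."""
    return comb(n, x) * p**x * (1.0 - p)**(n - x)

# Hypothetical single-arm data: 15 responders out of 40 patients.
x, n = 15, 40
mle = x / n                               # 0.375 maximizes the likelihood
# Relative support for two candidate response rates, stated as a ratio:
lr = binom_lik(0.375, x, n) / binom_lik(0.25, x, n)
print(f"MLE = {mle:.3f}, L(0.375)/L(0.25) = {lr:.1f}")
```

The likelihood ratio summarizes what the data say about the two hypotheses without reference to tail probabilities, which is the core of the third approach described above.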
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2009-01-01
Examination of the decadal variation of the number of El Nino onsets and El Nino-related months for the interval 1950-2008 clearly shows that the variation is better explained as one expressing normal fluctuation and not one related to global warming. Comparison of the recurrence periods for El Nino onsets against event durations for moderate/strong El Nino events results in a statistically important relationship that allows for the possible prediction of the onset for the next anticipated El Nino event. Because the last known El Nino was a moderate event of short duration (6 months), having onset in August 2006, unless it is a statistical outlier, one expects the next onset of El Nino probably in the latter half of 2009, with peak following in November 2009-January 2010. If true, then initial early extended forecasts of frequencies of tropical cyclones for the 2009 North Atlantic basin hurricane season probably should be revised slightly downward from near average-to-above average numbers to near average-to-below average numbers of tropical cyclones in 2009, especially as compared to averages since 1995, the beginning of the current high-activity interval for tropical cyclone activity.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles, and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of Kp greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.
NON-REGULAR MAXIMUM LIKELIHOOD ESTIMATION
Even though a body of data on the environmental occurrence of medicinal, government-approved ("ethical") pharmaceuticals has been growing over the last two decades (the subject of this book), nearly nothing is known about the disposition of illicit (illegal) drugs in th...
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Sun, Yanqing; Sundaram, Rajeshwari; Zhao, Yichuan
2009-01-01
The Cox model with time-dependent coefficients has been studied by a number of authors recently. In this paper, we develop empirical likelihood (EL) pointwise confidence regions for the time-dependent regression coefficients via local partial likelihood smoothing. The EL simultaneous confidence bands for a linear combination of the coefficients are also derived based on the strong approximation methods. The empirical likelihood ratio is formulated through the local partial log-likelihood for the regression coefficient functions. Our numerical studies indicate that the EL pointwise/simultaneous confidence regions/bands have satisfactory finite sample performances. Compared with the confidence regions derived directly based on the asymptotic normal distribution of the local constant estimator, the EL confidence regions are overall tighter and can better capture the curvature of the underlying regression coefficient functions. Two data sets, the gastric cancer data and the Mayo Clinic primary biliary cirrhosis data, are analyzed using the proposed method. PMID:19838322
Parametric likelihood inference for interval censored competing risks data
Hudgens, Michael G.; Li, Chenxi
2014-01-01
Summary Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV. PMID:24400873
Revised activation estimates for silicon carbide
Heinisch, H.L.; Cheng, E.T.; Mann, F.M.
1996-10-01
Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of ²⁶Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low activation material, even in first wall applications.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
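The core ABC idea is simple to sketch. The following is a generic rejection-ABC toy example (estimating a Poisson mean from mock count data), not the cosmoabc API, whose specifics are not given here; all names and numbers are invented for illustration.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_draws=20000):
    """Basic ABC rejection sampling: keep parameter draws whose simulated
    data land within eps of the observation under the chosen distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(1)
observed = rng.poisson(4.0, size=100)               # mock "catalog", true mean 4

posterior = abc_rejection(
    observed,
    simulate=lambda lam: rng.poisson(lam, size=100),
    prior_draw=lambda: rng.uniform(0.0, 10.0),      # flat prior on the mean
    distance=lambda a, b: abs(a.mean() - b.mean()), # summary-statistic distance
    eps=0.2,
)
print(len(posterior), posterior.mean())             # posterior concentrates near 4
```

Population Monte Carlo ABC, as used by cosmoabc, improves on this brute-force rejection scheme by iteratively shrinking eps and reweighting draws with adaptive importance sampling, but the likelihood-free principle is the same.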
Estimating phytoplankton photosynthesis by active fluorescence
Falkowski, P.G.; Kolber, Z.
1992-01-01
Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption capture cross section (σ), and a quantum yield (φ). As photosynthesis becomes increasingly light saturated, an increasing fraction of reaction centers close. At light saturation, the maximum photosynthetic rate is given as the product of the number of reaction centers (n) and their maximum electron transport rate (1/τ). Using active fluorometry it is possible to measure, non-destructively and in real time, the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as σ and φ. τ can be readily calculated from σ and the light saturation parameter Ik (which can itself be deduced in situ by active fluorescence measurements). We built a pump-and-probe fluorometer, which is interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f0) and succeeding a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15 and corresponds to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis, and sections showing the variability in its composite parameters, reveal a significant effect of nutrient availability on biomass-specific rates of photosynthesis in the ocean.
Automatic activity estimation based on object behaviour signature
NASA Astrophysics Data System (ADS)
Martínez-Pérez, F. E.; González-Fraga, J. A.; Tentori, M.
2010-08-01
Automatic estimation of human activities is a widely studied topic. However, the process becomes difficult when we want to estimate activities from a video stream, because human activities are dynamic and complex. Furthermore, we have to take into account the amount of information that images provide, since it makes modelling and estimating activities hard work. In this paper we propose a method for activity estimation based on object behaviour. Objects are located in a delimited observation area and their handling is recorded with a video camera. Activity estimation can then be done automatically by analyzing the video sequences. The proposed method is called "signature recognition" because it considers a space-time signature of the behaviour of the objects used in particular activities (e.g. patients' care in a healthcare environment for elderly people with restricted mobility). A pulse is produced when an object appears in or disappears from the observation area, i.e. there is a change from zero to one or vice versa. These changes are produced by identifying the objects with a bank of nonlinear correlation filters. Each object is processed independently and produces its own pulses; hence we are able to recognize several objects with different patterns at the same time. The method is applied to estimate three healthcare-related activities of elderly people with restricted mobility.
Be the Volume: A Classroom Activity to Visualize Volume Estimation
ERIC Educational Resources Information Center
Mikhaylov, Jessica
2011-01-01
A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…
NASA Astrophysics Data System (ADS)
Shang, Yilun
2016-08-01
How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
NASA Astrophysics Data System (ADS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-10-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
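The abstract does not specify MALCOM's internals, so as a stand-in, a simple smoothed bigram model illustrates the same idea of scoring how typical a categorical sequence is and flagging low-likelihood histories. All procedure names and data below are invented:

```python
from collections import defaultdict
import math

class BigramModel:
    """Toy stand-in for a sequence-likelihood model (not MALCOM itself):
    scores how typical a sequence of categorical events is."""
    def __init__(self, sequences, alpha=1.0):
        self.alpha = alpha                      # add-alpha smoothing
        self.counts = defaultdict(lambda: defaultdict(float))
        self.vocab = set()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += 1
                self.vocab.update((a, b))

    def log_likelihood(self, seq):
        """Average per-transition log-probability of a sequence."""
        ll, V = 0.0, len(self.vocab)
        for a, b in zip(seq, seq[1:]):
            total = sum(self.counts[a].values())
            ll += math.log((self.counts[a][b] + self.alpha)
                           / (total + self.alpha * V))
        return ll / max(len(seq) - 1, 1)

# Invented "medical histories" (sequences of procedure codes)
train = [["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "rx"]]
model = BigramModel(train)
typical = model.log_likelihood(["exam", "xray", "cast"])
odd = model.log_likelihood(["cast", "exam", "cast"])
print(typical > odd)  # anomalous histories score lower
```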
Model-Based Estimation of Active Knee Stiffness
Pfeifer, Serge; Hardegger, Michael; Vallery, Heike; List, Renate; Foresti, Mauro; Riener, Robert; Perreault, Eric J.
2013-01-01
Knee joint impedance varies substantially during physiological gait. Quantifying this modulation is critical for the design of transfemoral prostheses that aim to mimic physiological limb behavior. Conventional methods for quantifying joint impedance typically involve perturbing the joint in a controlled manner, and describing impedance as the dynamic relationship between applied perturbations and corresponding joint torques. These experimental techniques, however, are difficult to apply during locomotion without impeding natural movements. In this paper, we propose a method to estimate the elastic component of knee joint impedance that depends on muscle activation, often referred to as active knee stiffness. The method estimates stiffness using a musculoskeletal model of the leg and a model for activation-dependent short-range muscle stiffness. Muscle forces are estimated from measurements including limb kinematics, kinetics and muscle electromyograms. For isometric validation, we compare model estimates to measurements involving joint perturbations; measured stiffness is 17% lower than model estimates for extension, and 42% lower for flexion torques. We show that sensitivity of stiffness estimates to common approaches for estimating muscle force is small in isometric conditions. We also make initial estimates of how knee stiffness is modulated during gait, illustrating how this approach may be used to obtain parameters relevant to the design of transfemoral prostheses. PMID:22275672
Comparison of Estimated and Measured Muscle Activity During Inclined Walking.
Alexander, Nathalie; Schwameder, Hermann
2016-04-01
While inclined walking is a frequent daily activity, muscle forces during this activity have rarely been examined. Musculoskeletal models are commonly used to estimate internal forces in healthy populations, but these require a priori validation. The aim of this study was to compare estimated muscle activity using a musculoskeletal model with measured EMG data during inclined walking. Ten healthy male participants walked at different inclinations of 0°, ± 6°, ± 12°, and ± 18° on a ramp equipped with 2 force plates. Kinematics, kinetics, and muscle activity of the musculus (m.) biceps femoris, m. rectus femoris, m. vastus lateralis, m. tibialis anterior, and m. gastrocnemius lateralis were recorded. Agreement between estimated and measured muscle activity was determined via correlation coefficients, mean absolute errors, and trend analysis. Correlation coefficients between estimated and measured muscle activity for approximately 69% of the conditions were above 0.7. Mean absolute errors were rather high with only approximately 38% being ≤ 30%. Trend analysis revealed similar estimated and measured muscle activities for all muscles and tasks (uphill and downhill walking), except m. tibialis anterior during uphill walking. This model can be used for further analysis in similar groups of participants.
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
King, Tania L.; Thornton, Lukar E.; Bentley, Rebecca J.; Kavanagh, Anne M.
2015-01-01
Background Local destinations have previously been shown to be associated with higher levels of both physical activity and walking, but little is known about how the distribution of destinations is related to activity. Kernel density estimation is a spatial analysis technique that accounts for the location of features relative to each other. Using kernel density estimation, this study sought to investigate whether individuals who live near destinations (shops and service facilities) that are more intensely distributed rather than dispersed: 1) have higher odds of being sufficiently active; 2) engage in more frequent walking for transport and recreation. Methods The sample consisted of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Destinations within these areas were geocoded and kernel density estimates of destination intensity were created using kernels of 400m (meters), 800m and 1200m. Using multilevel logistic regression, the association between destination intensity (classified in quintiles Q1(least)—Q5(most)) and likelihood of: 1) being sufficiently active (compared to insufficiently active); 2) walking≥4/week (at least 4 times per week, compared to walking less), was estimated in models that were adjusted for potential confounders. Results For all kernel distances, there was a significantly greater likelihood of walking≥4/week, among respondents living in areas of greatest destinations intensity compared to areas with least destination intensity: 400m (Q4 OR 1.41 95%CI 1.02–1.96; Q5 OR 1.49 95%CI 1.06–2.09), 800m (Q4 OR 1.55, 95%CI 1.09–2.21; Q5, OR 1.71, 95%CI 1.18–2.48) and 1200m (Q4, OR 1.7, 95%CI 1.18–2.45; Q5, OR 1.86 95%CI 1.28–2.71). There was also evidence of associations between destination intensity and sufficient physical activity, however these associations were markedly attenuated when walking was included in the models. Conclusions This study, conducted within urban Melbourne, found that those who lived
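The spatial technique at the heart of this study can be sketched in a few lines. The coordinates below are hypothetical, and the quartic (biweight) kernel is a common choice for this kind of analysis rather than one the abstract confirms:

```python
import math

def kernel_density(point, destinations, bandwidth):
    """Quartic (biweight) kernel density estimate at `point`:
    destinations within `bandwidth` contribute, nearer ones more."""
    px, py = point
    density = 0.0
    for dx, dy in destinations:
        d = math.hypot(px - dx, py - dy)
        if d < bandwidth:
            u = d / bandwidth
            density += (1 - u * u) ** 2   # quartic kernel weight
    return density

# Hypothetical destination coordinates (meters)
shops = [(100, 120), (130, 90), (110, 150), (900, 900)]
home_near_cluster = (115, 115)
home_isolated = (500, 500)

b = 400.0  # 400 m, one of the kernel distances used in the study
print(kernel_density(home_near_cluster, shops, b) >
      kernel_density(home_isolated, shops, b))  # True
```

Unlike a simple count of destinations within a buffer, this measure rewards intensely clustered destinations over the same number of dispersed ones, which is the distinction the study exploits.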
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
Estimation of spatiotemporal neural activity using radial basis function networks.
Anderson, R W; Das, S; Keller, E L
1998-12-01
We report a method using radial basis function (RBF) networks to estimate the time evolution of population activity in topologically organized neural structures from single-neuron recordings. This is an important problem in neuroscience research, as such estimates may provide insights into systems-level function of these structures. Since single-unit neural data tends to be unevenly sampled and highly variable under similar behavioral conditions, obtaining such estimates is a difficult task. In particular, a class of cells in the superior colliculus called buildup neurons can have very narrow regions of saccade vectors for which they discharge at high rates but very large surround regions over which they discharge at low, but not zero, levels. Estimating the dynamic movement fields for these cells for two spatial dimensions at closely spaced timed intervals is a difficult problem, and no general method has been described that can be applied to all buildup cells. Estimation of individual collicular cells' spatiotemporal movement fields is a prerequisite for obtaining reliable two-dimensional estimates of the population activity on the collicular motor map during saccades. Therefore, we have developed several computational-geometry-based algorithms that regularize the data before computing a surface estimation using RBF networks. The method is then expanded to the problem of estimating simultaneous spatiotemporal activity occurring across the superior colliculus during a single movement (the inverse problem). In principle, this methodology could be applied to any neural structure with a regular, two-dimensional organization, provided a sufficient spatial distribution of sampled neurons is available.
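RBF interpolation of unevenly sampled measurements, the building block this abstract relies on, can be sketched as follows. This is a 1-D toy with invented data; the collicular application is two-dimensional and uses the authors' regularization steps, which are not reproduced here:

```python
import math

def rbf_fit(centers, values, width):
    """Solve for RBF weights so the network interpolates the samples.
    Gaussian basis; plain Gaussian elimination keeps it dependency-free."""
    n = len(centers)
    # Interpolation matrix A[i][j] = phi(|x_i - x_j|)
    A = [[math.exp(-((centers[i] - centers[j]) / width) ** 2)
          for j in range(n)] for i in range(n)]
    b = list(values)
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(x, centers, w, width):
    return sum(wi * math.exp(-((x - ci) / width) ** 2)
               for wi, ci in zip(w, centers))

# Unevenly sampled 1-D "activity" measurements (hypothetical)
xs = [0.0, 0.7, 1.1, 2.5, 4.0]
ys = [0.1, 0.9, 1.0, 0.3, 0.05]
w = rbf_fit(xs, ys, width=1.0)
# The network reproduces the samples and interpolates between them
print(all(abs(rbf_eval(x, xs, w, 1.0) - y) < 1e-6 for x, y in zip(xs, ys)))
```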
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.; Næss, S. K.; Seljebotn, D. S.; Górski, K. M.; Huey, G.; Jewell, J. B.; Rocha, G.; Wehus, I. K.
2013-11-10
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl{sub C}, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible in terms of cosmological parameters for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
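The factorization underlying the paper's first result can be checked numerically for the simplest case of immediate-predecessor dependence: the joint distribution equals the product of the bivariate marginals divided by the interior univariate marginals. A toy two-state chain with made-up probabilities:

```python
from collections import defaultdict
import itertools

# Toy Markov chain: initial distribution and transition probabilities
init = {0: 0.6, 1: 0.4}
trans = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
N = 4  # chain length

def joint(seq):
    p = init[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= trans[a][b]
    return p

# Uni- and bivariate marginals obtained by summing the joint
uni = [defaultdict(float) for _ in range(N)]
bi = [defaultdict(float) for _ in range(N - 1)]
for seq in itertools.product((0, 1), repeat=N):
    p = joint(seq)
    for i, x in enumerate(seq):
        uni[i][x] += p
    for i in range(N - 1):
        bi[i][seq[i], seq[i + 1]] += p

def joint_from_marginals(seq):
    # p(x_1..x_N) = prod_i p(x_i, x_{i+1}) / prod_{interior i} p(x_i)
    num = 1.0
    for i in range(N - 1):
        num *= bi[i][seq[i], seq[i + 1]]
    den = 1.0
    for i in range(1, N - 1):
        den *= uni[i][seq[i]]
    return num / den

seq = (0, 1, 1, 0)
print(abs(joint(seq) - joint_from_marginals(seq)) < 1e-12)  # True
```

In the CMB setting the variables are power spectrum multipoles and the conditional-dependence band is Δl{sub C} wide rather than one step, but the same marginal-only reconstruction is what enables both the hybrid likelihood and the accelerated Blackwell-Rao estimator.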
Targeted Maximum Likelihood Based Causal Inference: Part I
van der Laan, Mark J.
2010-01-01
Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is
Human ECG signal parameters estimation during controlled physical activity
NASA Astrophysics Data System (ADS)
Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz
2015-09-01
ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
Robust state estimation for neural networks with discontinuous activations.
Liu, Xiaoyang; Cao, Jinde
2010-12-01
Discontinuous dynamical systems, particularly neural networks with discontinuous activation functions, arise in a number of applications and have received considerable research attention in recent years. In this paper, the robust state estimation problem is investigated for uncertain neural networks with discontinuous activations and time-varying delays, where the neuron-dependent nonlinear disturbances on the network outputs are only assumed to satisfy the local Lipschitz condition. Based on the theory of differential inclusions and nonsmooth analysis, several criteria are presented to guarantee the existence of the desired robust state estimator for the discontinuous neural networks. It is shown that the design of the state estimator for such networks can be achieved by solving some linear matrix inequalities, which are dependent on the size of the time derivative of the time-varying delays. Finally, numerical examples are given to illustrate the theoretical results.
Estimating Active Layer Thickness from Remotely Sensed Surface Deformation
NASA Astrophysics Data System (ADS)
Liu, L.; Schaefer, K. M.; Zhang, T.; Wahr, J. M.
2010-12-01
We estimate active layer thickness (ALT) from remotely sensed surface subsidence during thawing seasons derived from interferometric synthetic aperture radar (InSAR) measurements. Ground ice takes up more volume than ground water, so as the soil thaws in summer and the active layer deepens, the ground subsides. The volume of melted ground water during the summer thaw determines seasonal subsidence. ALT is defined as the maximum thaw depth at the end of a thawing season. By using InSAR to measure surface subsidence between the start and end of summer season, one can estimate the depth of thaw over a large area (typically 100 km by 100 km). We developed an ALT retrieval algorithm integrating InSAR-derived surface subsidence, observed soil texture, organic matter content, and moisture content. We validated this algorithm in the continuous permafrost area on the North Slope of Alaska. Based on InSAR measurements using ERS-1/2 SAR data, our estimated values match in situ measurements of ALT within 1--10 cm at Circumpolar Active Layer Monitoring (CALM) sites within the study area. The active layer plays a key role in land surface processes in cold regions. Current measurements of ALT using mechanical probing, frost/thaw tubes, or inferred from temperature measurements are of high quality, but limited in spatial coverage. Using InSAR to estimate ALT greatly expands the spatial coverage of ALT observations.
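A heavily simplified version of the physical relation (uniform soil moisture, ignoring the soil texture and organic-matter profiles the actual retrieval algorithm integrates, with made-up numbers) illustrates how seasonal subsidence constrains ALT via the roughly 9% density difference between ice and water:

```python
RHO_WATER, RHO_ICE = 1000.0, 917.0  # kg/m^3

def seasonal_subsidence(alt_m, water_content):
    """Simplified forward model: thaw subsidence from the volume lost
    when ground ice melts over the full active layer thickness."""
    return alt_m * water_content * (RHO_WATER - RHO_ICE) / RHO_ICE

def estimate_alt(subsidence_m, water_content):
    # Invert the forward model: InSAR subsidence -> active layer thickness
    return subsidence_m * RHO_ICE / ((RHO_WATER - RHO_ICE) * water_content)

# Hypothetical: 2 cm of InSAR-derived summer subsidence, 45% water content
alt = estimate_alt(0.02, water_content=0.45)
print(round(alt, 2))  # ALT in meters
```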
EIA Corrects Errors in Its Drilling Activity Estimates Series
1998-01-01
The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.
EIA Completes Corrections to Drilling Activity Estimates Series
1999-01-01
The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.
A hybrid likelihood algorithm for risk modelling.
Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D
1995-03-01
The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
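The L = L1 + L0 partition can be illustrated for the simplest possible model, a constant tumour rate. This is not the EPI-CURE implementation or the paper's reformulation of L0; the cohort numbers are invented:

```python
import math

# Hypothetical cohort: (follow-up in person-years, tumour observed?) per person
cohort = [(12.0, False), (8.5, True), (20.0, False), (15.2, True), (9.3, False)]

def log_likelihood(rate):
    """Poisson-process log-likelihood split as L = L1 + L0:
    L1 sums log(rate) over events; L0 is the event-free exposure term."""
    L1 = sum(math.log(rate) for _, event in cohort if event)
    L0 = -rate * sum(time for time, _ in cohort)
    return L1 + L0

# For a constant rate the MLE is events / person-years, and it
# maximizes the partitioned likelihood
events = sum(event for _, event in cohort)
person_years = sum(time for time, _ in cohort)
mle = events / person_years
print(log_likelihood(mle) > log_likelihood(mle * 1.5))  # True
print(log_likelihood(mle) > log_likelihood(mle * 0.5))  # True
```

The paper's point is that L0, naively a sum over every person's every event-free interval, collapses to a form whose cost does not grow with cohort size; in this constant-rate toy that collapse is simply the single `-rate * total_person_years` term.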
Efficient computations with the likelihood ratio distribution.
Kruijver, Maarten
2015-01-01
What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there are a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
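The importance-sampling idea can be sketched with a toy likelihood ratio between two normal models rather than forensic genotypes. It rests on the identity P_Hd(LR > t) = E_Hp[1{LR > t} / LR]: sample under the hypothesis that makes the rare event common, then reweight:

```python
import math
import random

# Toy model: LR for an observation x under N(1,1) (Hp) versus N(0,1) (Hd)
def lr(x):
    return math.exp(x - 0.5)   # ratio of the two normal densities

random.seed(1)
t = 10.0
n = 200_000

# Direct simulation under Hd: the exceedance is a rare event
direct = sum(lr(random.gauss(0, 1)) > t for _ in range(n)) / n

# Importance sampling: draw under Hp, where exceedances are common,
# and reweight each one by 1/LR
acc = 0.0
for _ in range(n):
    x = random.gauss(1, 1)
    if lr(x) > t:
        acc += 1.0 / lr(x)
is_est = acc / n

print(direct, is_est)  # two estimates of the same small probability
```

The importance-sampling estimator has far lower variance here because every weight is at most 1/t, whereas the direct indicator is a rare 0/1 event.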
Kollndorfer, K.; Krajnik, J.; Woitek, R.; Freiherr, J.; Prayer, D.; Schöpf, V.
2013-01-01
Multiple sclerosis (MS) is a chronic neurological disease, frequently affecting attention and working memory functions. Functional imaging studies investigating those functions in MS patients are hard to compare, as they include heterogeneous patient groups and use different paradigms for cognitive testing. The aim of this study was to investigate alterations in neuronal activation between MS patients and healthy controls performing attention and working memory tasks. Two meta-analyses of previously published fMRI studies investigating attention and working memory were conducted for MS patients and healthy controls, respectively. Resulting maps were contrasted to compare brain activation in patients and healthy controls. Significantly increased brain activation in the inferior parietal lobule and the dorsolateral prefrontal cortex was detected for healthy controls. In contrast, higher neuronal activation in MS patients was obtained in the left ventrolateral prefrontal cortex and the right premotor area. With this meta-analytic approach previous results of investigations examining cognitive function using fMRI are summarized and compared. Therefore a more general view on cognitive dysfunction in this heterogeneous disease is enabled. PMID:24056084
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify the evaporative vapor generation based on real parking activity data. As compared to the existing methods, two improvements are applied in this new approach to reduce the uncertainties: First, evaporative vapor generation from diurnal parking events is usually calculated based on estimated average parking duration for the whole fleet, while in this study, vapor generation rate is calculated based on parking activities distribution. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and the weighted vapor generation rate is 5-8% less than calculation without considering parking activity.
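The weighted-average step described above can be illustrated with invented numbers; the actual study derives its hourly incremental rates from Wade-Reddy's equation and observed temperatures:

```python
# Hypothetical hourly incremental vapor generation rates (g/h), rising and
# falling with daytime temperature, and the fraction of parked-vehicle hours
# falling in each hour -- illustrative numbers only
hourly_rate = [0.2, 0.3, 0.5, 0.8, 1.0, 0.7]
parking_fraction = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]

# Weighted average generation over the parking-activity distribution,
# instead of one rate for an assumed fleet-wide average parking duration
weighted = sum(r * f for r, f in zip(hourly_rate, parking_fraction))
print(round(weighted, 3))  # g/h
```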
Total myrosinase activity estimates in brassica vegetable produce.
Dosz, Edward B; Ku, Kang-Mo; Juvik, John A; Jeffery, Elizabeth H
2014-08-13
Isothiocyanates, generated from the hydrolysis of glucosinolates in plants of the Brassicaceae family, promote health, including anticancer bioactivity. Hydrolysis requires the plant enzyme myrosinase, giving myrosinase a key role in health promotion by brassica vegetables. Myrosinase measurement typically involves isolating crude protein, potentially underestimating activity in whole foods. Myrosinase activity was estimated using unextracted fresh tissues of five broccoli and three kale cultivars, measuring the formation of allyl isothiocyanate (AITC) and/or glucose from exogenous sinigrin. A correlation between AITC and glucose formation was found, although activity was substantially lower measured as glucose release. Using exogenous sinigrin or endogenous glucoraphanin, concentrations of the hydrolysis products AITC and sulforaphane correlated (r = 0.859; p = 0.006), suggesting that broccoli shows no myrosinase selectivity among sinigrin and glucoraphanin. Measurement of AITC formation provides a novel, reliable estimation of myrosinase-dependent isothiocyanate formation suitable for use with whole vegetable food samples. PMID:25051514
Estimation of Evapotranspiration as a function of Photosynthetic Active Radiation
NASA Astrophysics Data System (ADS)
Wesley, E.; Migliaccio, K.; Judge, J.
2012-12-01
The purpose of this research project is to more accurately measure the water balance and energy movements, in order to properly allocate water resources at the Snapper Creek Site in Miami-Dade County, FL, by quantifying and estimating evapotranspiration (ET). ET is generally estimated using weather-based equations; this project focused on estimating ET as a function of photosynthetically active radiation (PAR). The project objectives were, first, to compose a function of PAR with calculated coefficients that can accurately estimate daily ET values using the fewest variables in its estimation equation, and second, to compare the newly identified PAR-based ET estimation function with TURC estimates against our actual eddy covariance (EC) ET data and determine the differences in ET values. PAR, volumetric water content (VWC), and temperature (T) data were quality checked and used in developing single- and multiple-variable regression models fit with SigmaPlot software. Fifteen different ET estimation equations were evaluated against EC ET and TURC-estimated ET using R2 and slope factors. The selected equation that best estimated EC ET was cross-validated using a 5-month data set; its daily and monthly ET values and sums were compared against the commonly used TURC equation. Using a multiple-variable regression model, an equation with three variables (i.e., VWC, T, and PAR) was identified that best fit daily EC ET data. However, a regression was also found that used only PAR and provided ET predictions of similar accuracy. The PAR-based regression model predicted daily EC ET more accurately than the traditional TURC method. Using only PAR to estimate ET reduces the input variables compared with the TURC model, which requires T and solar radiation. Thus, the PAR approach is not only more accurate but also more cost effective. The PAR-based ET estimation equation derived in this study may be overfit, considering only 5 months of data were used to produce the PAR
Synthesizing regression results: a factored likelihood method.
Wu, Meng-Jia; Becker, Betsy Jane
2013-06-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
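The sweep-operator machinery the abstract mentions can be illustrated on a small correlation matrix. This is a generic textbook sweep, not the authors' implementation; the correlations are made up. Sweeping the predictor indices of a correlation matrix for variables (x1, x2, y) leaves the standardized slopes of y on x1 and x2 in the y column of the swept rows:

```python
import numpy as np

def sweep(A, k):
    """One step of the sweep operator on a symmetric matrix A, pivot k.
    Returns a new matrix; the input is not modified."""
    d = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / d
    out[k, :] = A[k, :] / d
    out[:, k] = A[:, k] / d
    out[k, k] = -1.0 / d
    return out

# Correlation matrix for (x1, x2, y); values are illustrative.
R = np.array([[1.0, 0.5, 0.6],
              [0.5, 1.0, 0.4],
              [0.6, 0.4, 1.0]])

S = sweep(sweep(R, 0), 1)   # sweep both predictors
b = S[:2, 2]                # standardized slopes of y on x1, x2
```

The result matches the usual normal-equation solution inv(R_xx) @ r_xy; the sweep formulation is what lets missing correlations be imputed one variable at a time.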
Estimates of the global electric circuit from global thunderstorm activity
NASA Astrophysics Data System (ADS)
Hutchins, M. L.; Holzworth, R. H.; Brundell, J. B.
2013-12-01
The World Wide Lightning Location Network (WWLLN) has a global detection efficiency around 10%, however the network has been shown to identify 99% of thunderstorms (Jacobson, et al 2006, using WWLLN data from 2005). To create an estimate of the global electric circuit activity a clustering algorithm is applied to the WWLLN dataset to identify global thunderstorms from 2009-2013. The annual, seasonal, and regional thunderstorm activity is investigated with this new WWLLN thunderstorm dataset in order to examine the source behavior of the global electric circuit. From the clustering algorithm the total number of active thunderstorms is found every 30 minutes to create a measure of the global electric circuit source function. The clustering algorithm used is shown to be robust over parameter ranges related to real physical storm sizes and times. The thunderstorm groupings are verified with case study comparisons using satellite and radar data. It is found that there are on average 714 ± 81 thunderstorms active at any given time. Similarly the highest average number of thunderstorms occurs in July (783 ± 69) with the lowest in January (599 ± 76). The annual and diurnal thunderstorm activity seen with the WWLLN thunderstorms is in contrast with the bimodal stroke activity seen by WWLLN. Through utilizing the global coverage and high time resolution of WWLLN, it is shown that the total active thunderstorm count is less than previous estimates based on compiled climatologies.
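The source-function idea can be sketched by binning clustered storms into fixed windows and counting how many are simultaneously active. The clustering itself is not reproduced; the storm intervals below are hypothetical stand-ins for the algorithm's output:

```python
def active_storm_counts(storms, window_s=1800.0, span_s=7200.0):
    """Count thunderstorms active in each fixed window.
    storms: list of (start_s, end_s) intervals from some clustering step.
    A storm is counted in a window if the two intervals overlap."""
    counts = []
    t = 0.0
    while t < span_s:
        w0, w1 = t, t + window_s
        counts.append(sum(1 for s0, s1 in storms if s0 < w1 and s1 > w0))
        t += window_s
    return counts

# Hypothetical storm intervals (seconds) over a 2 h span:
storms = [(0.0, 2400.0), (600.0, 4000.0), (3600.0, 7200.0)]
counts = active_storm_counts(storms)
```

Each entry of `counts` is one sample of the global-circuit source function; averaging such samples over a month is what yields figures like 714 ± 81.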
Use of historical information in a maximum-likelihood framework
Cohn, T.A.; Stedinger, J.R.
1987-01-01
This paper discusses flood-quantile estimators which can employ historical and paleoflood information, both when the magnitudes of historical flood peaks are known, and when only threshold-exceedance information is available. Maximum likelihood, quasi-maximum likelihood and curve fitting methods for simultaneous estimation of 1, 2 and 3 unknown parameters are examined. The information contained in a 100 yr record of historical observations, during which the flood perception threshold was near the 10 yr flood level (i.e., on average, one flood in ten is above the threshold and hence is recorded), is equivalent to roughly 43, 64 and 78 years of systematic record in terms of the improvement of the precision of 100 yr flood estimators when estimating 1, 2 and 3 parameters, respectively. With the perception threshold at the 100 yr flood level, the historical data was worth 13, 20 and 46 years of systematic data when estimating 1, 2 and 3 parameters, respectively. © 1987.
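The way threshold-exceedance information enters a likelihood can be sketched as follows. A normal model for log annual maxima is used here purely for illustration (it is not the paper's distributional choice), the scale is held fixed so only one parameter is fit, and all numbers are made up: systematic years contribute densities, while historical years with no recorded flood contribute the probability of staying below the perception threshold.

```python
import math

def norm_logpdf(x, mu, s):
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

def norm_logcdf(x, mu, s):
    return math.log(0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2)))))

def loglik(mu, s, systematic, hist_exceed, threshold, hist_years):
    """Systematic years are fully observed; of the hist_years historical
    years, only the floods in hist_exceed rose above the perception
    threshold, so the remaining years are censored below it."""
    ll = sum(norm_logpdf(x, mu, s) for x in systematic)
    ll += sum(norm_logpdf(x, mu, s) for x in hist_exceed)
    ll += (hist_years - len(hist_exceed)) * norm_logcdf(threshold, mu, s)
    return ll

# Illustrative log-discharge data; grid search stands in for a real optimizer.
systematic = [2.0, 2.2, 1.9, 2.1]
hist_exceed = [2.8]
grid = [1.5 + 0.001 * i for i in range(1001)]
mu_hat = max(grid, key=lambda m: loglik(m, 0.3, systematic,
                                        hist_exceed, 2.5, 100))
```

The 99 censored historical years pull the fitted location down relative to a fit that uses the recorded magnitudes alone, which is exactly the information gain the paper quantifies in equivalent years of systematic record.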
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING
Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao
2013-01-01
We study a marginal empirical likelihood approach in scenarios when the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and the generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainties of such estimators. Such a merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive to distributional assumptions, and can be conveniently adapted to be applied in a broad range of scenarios such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach. PMID:24415808
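A minimal sketch of the empirical likelihood machinery involved, for the simplest case of testing a zero mean of a univariate sample (a generic Owen-style construction, not the authors' screening procedure): the ratio evaluated at zero is the kind of quantity the screening statistic builds on, and its self-studentization is visible in that no variance estimate appears.

```python
import math

def el_logratio_mean_zero(x):
    """-2 log empirical likelihood ratio for H0: E[X] = 0.
    Requires min(x) < 0 < max(x). Solves the score equation
    sum x_i / (1 + lam*x_i) = 0 by bisection; the score is
    strictly decreasing in lam, so bisection is safe."""
    assert min(x) < 0.0 < max(x)
    lo = -1.0 / max(x) + 1e-9          # keep all 1 + lam*x_i > 0
    hi = -1.0 / min(x) - 1e-9
    def g(lam):
        return sum(xi / (1.0 + lam * xi) for xi in x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log1p(lam * xi) for xi in x)
```

For data already centered at zero the statistic is 0; the further the sample mean sits from zero relative to the sample spread, the larger the statistic, with a chi-squared(1) calibration in large samples.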
Likelihood approaches for the invariant density ratio model with biased-sampling data
Shen, Yu; Ning, Jing; Qin, Jing
2012-01-01
The full likelihood approach in statistical analysis is regarded as the most efficient means for estimation and inference. For complex length-biased failure time data, computational algorithms and theoretical properties are not readily available, especially when a likelihood function involves infinite-dimensional parameters. Relying on the invariance property of length-biased failure time data under the semiparametric density ratio model, we present two likelihood approaches for the estimation and assessment of the difference between two survival distributions. The most efficient maximum likelihood estimators are obtained by the EM algorithm and profile likelihood. We also provide a simple numerical method for estimation and inference based on conditional likelihood, which can be generalized to k-arm settings. Unlike conventional survival data, the mean of the population failure times can be consistently estimated given right-censored length-biased data under mild regularity conditions. To check the semiparametric density ratio model assumption, we use a test statistic based on the area between two survival distributions. Simulation studies confirm that the full likelihood estimators are more efficient than the conditional likelihood estimators. We analyse an epidemiological study to illustrate the proposed methods. PMID:23843663
PAML 4: phylogenetic analysis by maximum likelihood.
Yang, Ziheng
2007-08-01
PAML, currently in version 4, is a package of programs for phylogenetic analyses of DNA and protein sequences using maximum likelihood (ML). The programs may be used to compare and test phylogenetic trees, but their main strengths lie in the rich repertoire of evolutionary models implemented, which can be used to estimate parameters in models of sequence evolution and to test interesting biological hypotheses. Uses of the programs include estimation of synonymous and nonsynonymous rates (d(N) and d(S)) between two protein-coding DNA sequences, inference of positive Darwinian selection through phylogenetic comparison of protein-coding genes, reconstruction of ancestral genes and proteins for molecular restoration studies of extinct life forms, combined analysis of heterogeneous data sets from multiple gene loci, and estimation of species divergence times incorporating uncertainties in fossil calibrations. This note discusses some of the major applications of the package, which includes example data sets to demonstrate their use. The package is written in ANSI C, and runs under Windows, Mac OS X, and UNIX systems. It is available at http://abacus.gene.ucl.ac.uk/software/paml.html.
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Fast inference in generalized linear models via expected log-likelihoods.
Ramirez, Alexandro D; Paninski, Liam
2014-04-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
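The substitution can be made concrete for a canonical Poisson GLM with exponential nonlinearity. For standard-normal covariates the expectation has the closed form exp(|theta|^2/2); this identity-covariance case is an illustrative special case, not the paper's general treatment, and constants independent of theta (the log y! term) are dropped throughout.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20000, 3
X = rng.standard_normal((n, p))            # covariate law known: N(0, I)
theta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ theta_true))    # canonical Poisson GLM

def exact_loglik(theta):
    """Exact log-likelihood (up to the theta-free log(y!) constant)."""
    eta = X @ theta
    return y @ eta - np.exp(eta).sum()

def expected_loglik(theta):
    """Replace (1/n) * sum_i exp(x_i . theta) with its expectation
    E[exp(x . theta)] = exp(|theta|^2 / 2) for x ~ N(0, I): the
    n-term data-dependent sum collapses to one closed-form term."""
    return y @ (X @ theta) - n * np.exp(theta @ theta / 2.0)
```

Only the linear term y @ (X @ theta) touches the data when maximizing the expected log-likelihood, which is where the computational savings for large n come from.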
Estimating relative demand for wildlife: Conservation activity indicators
NASA Astrophysics Data System (ADS)
Gray, Gary G.; Larson, Joseph S.
1982-09-01
An alternative method of estimating relative demand among nonconsumptive uses of wildlife and among wildlife species is proposed. A demand intensity score (DIS), derived from the relative extent of an individual's involvement in outdoor recreation and conservation activities, is used as a weighting device to adjust the importance of preference rankings for wildlife uses and wildlife species relative to other members of a survey population. These adjusted preference rankings were considered to reflect relative demand levels (RDLs) for wildlife uses and for species by the survey population. This technique may be useful where it is not possible or desirable to estimate demand using traditional economic means. In one of the findings from a survey of municipal conservation commission members in Massachusetts, presented as an illustration of this methodology, poisonous snakes were ranked third in preference among five groups of reptiles. The relative demand level for poisonous snakes, however, was last among the five groups.
On the precision of automated activation time estimation
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sinx/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dtmax, and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sinx/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
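The matched-filter fiducial estimate with sub-sample refinement can be sketched on a synthetic, noise-free deflection. The paper interpolates with sin(x)/x; a three-point parabolic fit of the correlation peak is used below as a cheap stand-in for the same sub-sample idea, and the waveform and numbers are illustrative.

```python
import numpy as np

fs = 1000.0                                        # sampling rate (Hz)
t = np.arange(0, 0.1, 1.0 / fs)
template = np.exp(-((t - 0.02) / 0.002) ** 2)      # synthetic deflection
delay = 0.0313                                     # true activation offset (s)
sig = np.exp(-((t - 0.02 - delay) / 0.002) ** 2)   # noise-free for clarity

corr = np.correlate(sig, template, mode="full")
k = int(corr.argmax())
lag = k - (len(template) - 1)          # integer-sample lag estimate

# Sub-sample refinement: fit a parabola through the three correlation
# samples around the peak and take its vertex.
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
t_hat = (lag + frac) / fs              # fiducial-point estimate (s)
```

The true offset of 31.3 samples falls between grid points, so the integer argmax alone is off by 0.3 samples; the refinement recovers most of that, which is the point the abstract makes about interpolation permitting lower digitization rates.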
Expressed Likelihood as Motivator: Creating Value through Engaging What’s Real
Higgins, E. Tory; Franks, Becca; Pavarini, Dana; Sehnert, Steen; Manley, Katie
2012-01-01
Our research tested two predictions regarding how likelihood can have motivational effects as a function of how a probability is expressed. We predicted that describing the probability of a future event that could be either A or B using the language of high likelihood (“80% A”) rather than low likelihood (“20% B”), i.e., high rather than low expressed likelihood, would make a present activity more real and engaging, as long as the future event had properties relevant to the present activity. We also predicted that strengthening engagement from the high (vs. low) expressed likelihood of a future event would intensify the value of present positive and negative objects (in opposite directions). Both predictions were supported. There was also evidence that this intensification effect from expressed likelihood was independent of the actual probability or valence of the future event. What mattered was whether high versus low likelihood language was used to describe the future event. PMID:23940411
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data
Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong
2013-01-01
This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169
Spectral estimators of absorbed photosynthetically active radiation in corn canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.
1984-01-01
Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a LANDSAT MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95% of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50% of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73% of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.
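The index-to-absorption step can be sketched in a few lines. The normalized-difference formula is standard; the linear calibration coefficients below are illustrative placeholders, not the fitted values of the study.

```python
def normalized_difference(nir, red):
    """Normalized-difference vegetation index from two reflectance factors
    (near-infrared and red bands of the radiometer)."""
    return (nir - red) / (nir + red)

# Hypothetical linear calibration fAPAR = a*ND + b, of the kind fitted
# from planting to silking; a and b are made-up numbers.
a, b = 1.25, -0.20
nd = normalized_difference(nir=0.45, red=0.08)
fapar = a * nd + b          # estimated fraction of absorbed PAR
```

Summing daily `fapar` times incident PAR over the season gives the cumulated absorbed PAR that the abstract reports as the better yield indicator.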
Intercepted photosynthetically active radiation estimated by spectral reflectance
NASA Technical Reports Server (NTRS)
Hatfield, J. L.; Asrar, G.; Kanemasu, E. T.
1984-01-01
Interception of photosynthetically active radiation (PAR) was evaluated relative to greenness and normalized difference (MSS (7-5)/(7+5)) for five planting dates of wheat for 1978-79 and 1979-80 at Phoenix, Arizona. Intercepted PAR was calculated from leaf area index and stage of growth. Linear relationships were found with greenness and normalized difference, with separate relationships describing growth and senescence of the crop. Normalized difference was significantly better than greenness for all planting dates. For the leaf area growth portion of the season the relation between PAR interception and normalized difference was the same over years and planting dates. For the leaf senescence phase the relationships showed more variability due to the lack of data on light interception in sparse and senescing canopies. Normalized difference could be used to estimate PAR interception throughout a growing season.
Maximum likelihood inference of reticulate evolutionary histories.
Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay
2014-11-18
Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173
Physically constrained maximum likelihood mode filtering.
Papp, Joseph C; Preisig, James C; Morozov, Andrey K
2010-04-01
Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
Likelihood-based population independent component analysis
Eloyan, Ani; Crainiceanu, Ciprian M.; Caffo, Brian S.
2013-01-01
Independent component analysis (ICA) is a widely used technique for blind source separation, used heavily in several scientific research areas including acoustics, electrophysiology, and functional neuroimaging. We propose a scalable two-stage iterative true group ICA methodology for analyzing population level functional magnetic resonance imaging (fMRI) data where the number of subjects is very large. The method is based on likelihood estimators of the underlying source densities and the mixing matrix. As opposed to many commonly used group ICA algorithms, the proposed method does not require significant data reduction by a 2-fold singular value decomposition. In addition, the method can be applied to a large group of subjects since the memory requirements are not restrictive. The performance of our approach is compared with a commonly used group ICA algorithm via simulation studies. Furthermore, the proposed method is applied to a large collection of resting state fMRI datasets. The results show that established brain networks are well recovered by the proposed algorithm. PMID:23314416
A likelihood approach to calculating risk support intervals
Leal, S.M.; Ott, J.
1994-05-01
Genetic risks are usually computed under the assumption that genetic parameters, such as the recombination fraction, are known without error. Uncertainty in the estimates of these parameters must translate into uncertainty regarding the risk. To allow for uncertainties in parameter values, one may employ Bayesian techniques or, in a maximum-likelihood framework, construct a support interval (SI) for the risk. Here the authors have implemented the latter approach. The SI for the risk is based on the SIs of parameters involved in the pedigree likelihood. As an empirical example, the SI for the risk was calculated for probands who are members of chronic spinal muscular atrophy kindreds. In order to evaluate the accuracy of a risk in genetic counseling situations, the authors advocate that, in addition to a point estimate, an SI for the risk should be calculated. 16 refs., 1 fig., 1 tab.
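The support-interval construction can be sketched for the simplest one-parameter case: a binomial likelihood for a recombination fraction, with the interval taken as all parameter values whose log-likelihood lies within m support units of the maximum. The counts, the grid search, and the choice m = 1 natural-log unit are illustrative (conventions vary, e.g. log10 units), and a real pedigree likelihood would replace the binomial.

```python
import math

def log_lik(theta, n_rec, n_tot):
    """Binomial log-likelihood for recombination fraction theta."""
    return n_rec * math.log(theta) + (n_tot - n_rec) * math.log(1.0 - theta)

def support_interval(n_rec, n_tot, m=1.0, step=1e-4):
    """All theta in (0, 0.5) whose support (natural-log likelihood)
    is within m units of the maximum, found by grid scan."""
    grid = [i * step for i in range(1, int(0.5 / step))]
    ll = [log_lik(t, n_rec, n_tot) for t in grid]
    best = max(ll)
    inside = [t for t, l in zip(grid, ll) if l >= best - m]
    return min(inside), max(inside)

lo, hi = support_interval(n_rec=5, n_tot=50)   # MLE is 0.1
```

Propagating such an interval for the recombination fraction through the risk formula is what turns a point risk into the risk SI the authors advocate reporting.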
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
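The pairwise construction can be sketched with the FGM copula named in the abstract. Exponential margins, the distance-decay form theta(d) = theta0 * exp(-d/rho), and all numbers are illustrative choices for the sketch (the paper estimates the dependence parameters and handles censoring, which is omitted here).

```python
import math

def fgm_pair_logdensity(t1, t2, rate, theta):
    """Log bivariate density from the FGM copula, whose density is
    c(u, v) = 1 + theta*(1 - 2u)*(1 - 2v) with |theta| <= 1,
    with exponential(rate) margins (an illustrative marginal choice)."""
    u = 1.0 - math.exp(-rate * t1)     # marginal CDFs
    v = 1.0 - math.exp(-rate * t2)
    lf1 = math.log(rate) - rate * t1   # marginal log-densities
    lf2 = math.log(rate) - rate * t2
    return lf1 + lf2 + math.log(1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def composite_loglik(times, coords, theta0, rho, rate=1.0):
    """Pairwise composite log-likelihood; the dependence parameter decays
    with pairwise distance, theta(d) = theta0*exp(-d/rho) (illustrative)."""
    ll = 0.0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            d = math.dist(coords[i], coords[j])
            th = max(-1.0, min(1.0, theta0 * math.exp(-d / rho)))
            ll += fgm_pair_logdensity(times[i], times[j], rate, th)
    return ll
```

Maximizing this sum over (theta0, rho) is the composite-likelihood analogue of maximum likelihood; only bivariate margins are ever evaluated, which is what makes the approach tractable for spatial dependence.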
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
Maximum likelihood synchronizer for binary overlapping PCM/NRZ signals.
NASA Technical Reports Server (NTRS)
Wang, C. D.; Noack, T. L.; Morris, J. F.
1973-01-01
A maximum likelihood parameter estimation technique for the self bit synchronization problem is investigated. The input to the bit synchronizer is a sequence of binary overlapping PCM/NRZ signals in the presence of white Gaussian noise with zero mean and known variance. The resulting synchronizer consists of matched filters, a transition device and a weighting function. Finally, the performance is examined by Monte Carlo simulations.
Spatially explicit maximum likelihood methods for capture-recapture studies.
Borchers, D L; Efford, M G
2008-06-01
Live-trapping capture-recapture studies of animal populations with fixed trap locations inevitably have a spatial component: animals close to traps are more likely to be caught than those far away. This is not addressed in conventional closed-population estimates of abundance and without the spatial component, rigorous estimates of density cannot be obtained. We propose new, flexible capture-recapture models that use the capture locations to estimate animal locations and spatially referenced capture probability. The models are likelihood-based and hence allow use of Akaike's information criterion or other likelihood-based methods of model selection. Density is an explicit parameter, and the evaluation of its dependence on spatial or temporal covariates is therefore straightforward. Additional (nonspatial) variation in capture probability may be modeled as in conventional capture-recapture. The method is tested by simulation, using a model in which capture probability depends only on location relative to traps. Point estimators are found to be unbiased and standard error estimators almost unbiased. The method is used to estimate the density of Red-eyed Vireos (Vireo olivaceus) from mist-netting data from the Patuxent Research Refuge, Maryland, U.S.A. Estimates agree well with those from an existing spatially explicit method based on inverse prediction. A variety of additional spatially explicit models are fitted; these include models with temporal stratification, behavioral response, and heterogeneous animal home ranges. PMID:17970815
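The spatial component of capture probability can be sketched with a half-normal detection function, a common choice in spatially explicit capture-recapture. This is an illustrative fragment only; a full SECR likelihood also integrates over the unknown activity centers and models density, and the names here are assumptions, not the authors' code:

```python
import numpy as np

def detection_prob(center, traps, g0, sigma):
    # Half-normal detection: p(d) = g0 * exp(-d^2 / (2 sigma^2)),
    # so animals close to a trap are more likely to be caught.
    d2 = np.sum((traps - center) ** 2, axis=1)
    return g0 * np.exp(-d2 / (2.0 * sigma ** 2))

def history_loglik(history, center, traps, g0, sigma):
    # Bernoulli capture at each trap on each occasion, independent given the center
    p = np.clip(detection_prob(center, traps, g0, sigma), 1e-12, 1 - 1e-12)
    h = np.asarray(history, dtype=float)   # shape (n_occasions, n_traps)
    return float(np.sum(h * np.log(p) + (1 - h) * np.log(1 - p)))
```

An animal caught only at one trap makes activity centers near that trap far more likely, which is how capture locations inform estimated animal locations.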
Reconstruction of ancestral genomic sequences using likelihood.
Elias, Isaac; Tuller, Tamir
2007-03-01
A challenging task in computational biology is the reconstruction of genomic sequences of extinct ancestors, given the phylogenetic tree and the sequences at the leaves. This task is best solved by calculating the most likely estimate of the ancestral sequences, along with the most likely edge lengths. We deal with this problem and also the variant in which the phylogenetic tree, in addition to the ancestral sequences, needs to be estimated. The latter problem is known to be NP-hard, while the computational complexity of the former is unknown. Currently, all algorithms for solving these problems are heuristics without performance guarantees. The biological importance of these problems calls for developing better algorithms with guarantees of finding either optimal or approximate solutions. We develop approximation, fixed-parameter tractable (FPT), and fast heuristic algorithms for two variants of the problem: when the phylogenetic tree is known and when it is unknown. The approximation algorithm guarantees a solution with a log-likelihood ratio of 2 relative to the optimal solution. The FPT algorithm has a running time which is polynomial in the length of the sequences and exponential in the number of taxa. This makes it useful for calculating the optimal solution for small trees. Moreover, we combine the approximation algorithm and the FPT into an algorithm with an arbitrarily good approximation guarantee (PTAS). We tested our algorithms on both synthetic and biological data. In particular, we used the FPT for computing the most likely ancestral mitochondrial genomes of Hominidae (the great apes), thereby answering an interesting biological question. Moreover, we show how the approximation algorithms find good solutions for reconstructing the ancestral genomes for a set of lentiviruses (relatives of HIV). Supplementary material of this work is available at www.nada.kth.se/~isaac/publications/aml/aml.html.
Maximum likelihood tuning of a vehicle motion filter
NASA Technical Reports Server (NTRS)
Trankle, Thomas L.; Rabin, Uri H.
1990-01-01
This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.
Likelihood Methods for Testing Group Problem Solving Models with Censored Data.
ERIC Educational Resources Information Center
Regal, Ronald R.; Larntz, Kinley
1978-01-01
Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
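As a generic illustration of maximum likelihood under time-limit (right) censoring, not the authors' group problem-solving model: with exponential solution times, censored observations contribute only their survival probability, and the MLE has the closed form "number of completed solutions divided by total observed time". Names are illustrative:

```python
import math

def exp_censored_mle(times, observed):
    # Right-censored exponential model:
    #   log L(lam) = d * log(lam) - lam * T,
    # with d = number of uncensored events, T = total time at risk.
    # Setting the score to zero gives lam_hat = d / T.
    return sum(observed) / sum(times)

def exp_censored_loglik(lam, times, observed):
    # observed[i] = 1 if solved within the limit, 0 if censored at times[i]
    return sum(observed) * math.log(lam) - lam * sum(times)
```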
Likelihood methods for cluster dark energy surveys
Hu, Wayne; Cohn, J. D.
2006-03-15
Galaxy cluster counts at high redshift, binned into spatial pixels and binned into ranges in an observable proxy for mass, contain a wealth of information on both the dark energy equation of state and the mass selection function required to extract it. The likelihood of the number counts follows a Poisson distribution whose mean fluctuates with the large-scale structure of the universe. We develop a joint likelihood method that accounts for these distributions. Maximization of the likelihood over a theoretical model that includes both the cosmology and the observable-mass relations allows for a joint extraction of dark energy and cluster structural parameters.
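The Poisson part of such a binned-count likelihood is easy to sketch. The paper's joint method additionally lets the bin means fluctuate with large-scale structure; that contribution is omitted in this illustrative fragment:

```python
import math

def poisson_loglik(counts, means):
    # Joint log-likelihood of independent Poisson counts over
    # pixel/observable-mass bins:
    #   log L = sum_i [ n_i * log(m_i) - m_i - log(n_i!) ]
    # where m_i is the model-predicted mean count in bin i.
    return sum(n * math.log(m) - m - math.lgamma(n + 1)
               for n, m in zip(counts, means))
```

Maximizing this over the parameters that set the means (cosmology plus observable-mass relation) is the joint extraction the abstract describes.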
Optimal stimulus scheduling for active estimation of evoked brain networks
NASA Astrophysics Data System (ADS)
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
Model Fit after Pairwise Maximum Likelihood
Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T; Wu, Shih-Wei
2015-01-28
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
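The behavioral signature described, more weight on likelihood information as sample size grows, falls out of conjugate Bayesian updating. A beta-binomial sketch (illustrative only, not the authors' model):

```python
def posterior_mean(alpha, beta, k, n):
    # Beta(alpha, beta) prior combined with k successes in n trials
    # gives a Beta(alpha + k, beta + n - k) posterior.
    return (alpha + k) / (alpha + beta + n)

def likelihood_weight(alpha, beta, n):
    # The posterior mean decomposes as
    #   w * (k / n) + (1 - w) * prior_mean,  with  w = n / (alpha + beta + n),
    # so the weight on the likelihood (the sample proportion k/n) grows with n.
    return n / (alpha + beta + n)
```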
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model. PMID:26160753
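The no-dropout RMNE statistic itself is straightforward to compute. A sketch assuming Hardy-Weinberg equilibrium and independent loci (illustrative only; the binary model with dropout and drop-in that the paper treats is more involved):

```python
def rmne(loci_freqs):
    # Random Man Not Excluded: probability that a random person's genotype
    # is contained in the mixture's allele set at every locus.
    # At one locus whose included alleles have frequencies p_1..p_k (HWE),
    # P(not excluded) = (p_1 + ... + p_k)^2; independent loci multiply.
    p = 1.0
    for freqs in loci_freqs:
        p *= sum(freqs) ** 2
    return p
```

Per the relationship the abstract cites, 1/rmne(...) lower-bounds the expected LR for an actual contributor.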
NASA Astrophysics Data System (ADS)
Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang
2015-12-01
In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association hypotheses and estimate a bias for each association. Then we calculate the likelihood of each association after bias compensation. Finally, we choose the association with the maximum likelihood among all hypotheses as the association result, and the corresponding bias estimate as the registration result. Considering the high rates of false alarms and interference, we adopt independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces correct association results while simultaneously estimating the bias precisely for a small number of targets in a multi-sensor fusion system.
Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models
ERIC Educational Resources Information Center
Cheung, Mike W.-L.
2013-01-01
Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Maximum-likelihood approach to strain imaging using ultrasound
Insana, M. F.; Cook, L. T.; Bilgen, M.; Chaturvedi, P.; Zhu, Y.
2009-01-01
A maximum-likelihood (ML) strategy for strain estimation is presented as a framework for designing and evaluating bioelasticity imaging systems. Concepts from continuum mechanics, signal analysis, and acoustic scattering are combined to develop a mathematical model of the ultrasonic waveforms used to form strain images. The model includes three-dimensional (3-D) object motion described by affine transformations, Rayleigh scattering from random media, and 3-D system response functions. The likelihood function for these waveforms is derived to express the Fisher information matrix and variance bounds for displacement and strain estimation. The ML estimator is a generalized cross correlator for pre- and post-compression echo waveforms that is realized by waveform warping and filtering prior to cross correlation and peak detection. Experiments involving soft tissuelike media show the ML estimator approaches the Cramér–Rao error bound for small scaling deformations: at 5 MHz and 1.2% compression, the predicted lower bound for displacement errors is 4.4 µm and the measured standard deviation is 5.7 µm. PMID:10738797
Maximum likelihood as a common computational framework in tomotherapy.
Olivera, G H; Shepard, D M; Reckwerdt, P J; Ruchala, K; Zachman, J; Fitchard, E E; Mackie, T R
1998-11-01
Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. PMID:9832016
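The maximum likelihood estimator the authors borrow from emission tomography is usually realized as the iterative MLEM update. A generic sketch for counts y ~ Poisson(A x), illustrative rather than the tomotherapy implementation:

```python
import numpy as np

def mlem_update(x, A, y, eps=1e-12):
    # One MLEM iteration:
    #   x_j <- x_j * [A^T (y / (A x))]_j / [A^T 1]_j
    # Multiplicative updates keep x nonnegative and increase the
    # Poisson likelihood of y given the system matrix A.
    proj = np.maximum(A @ x, eps)            # forward projection
    back = A.T @ (y / proj)                  # backproject data/model ratio
    norm = np.maximum(A.T @ np.ones_like(y), eps)
    return x * back / norm
```

The same update form serves image reconstruction directly, and analogous fixed-point schemes apply to the dose reconstruction and optimization problems the abstract groups together.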
Monocular distance estimation from optic flow during active landing maneuvers.
van Breugel, Floris; Morgansen, Kristi; Dickinson, Michael H
2014-06-01
Vision is arguably the most widely used sensor for position and velocity estimation in animals, and it is increasingly used in robotic systems as well. Many animals use stereopsis and object recognition in order to make a true estimate of distance. For a tiny insect such as a fruit fly or honeybee, however, these methods fall short. Instead, an insect must rely on calculations of optic flow, which can provide a measure of the ratio of velocity to distance, but not either parameter independently. Nevertheless, flies and other insects are adept at landing on a variety of substrates, a behavior that inherently requires some form of distance estimation in order to trigger distance-appropriate motor actions such as deceleration or leg extension. Previous studies have shown that these behaviors are indeed under visual control, raising the question: how does an insect estimate distance solely using optic flow? In this paper we use a nonlinear control theoretic approach to propose a solution for this problem. Our algorithm takes advantage of visually controlled landing trajectories that have been observed in flies and honeybees. Finally, we implement our algorithm, which we term dynamic peering, using a camera mounted to a linear stage to demonstrate its real-world feasibility.
Growing local likelihood network: Emergence of communities
NASA Astrophysics Data System (ADS)
Chen, S.; Small, M.
2015-10-01
In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Likelihood reinstates Archaeopteryx as a primitive bird
Lee, Michael S. Y.; Worthy, Trevor H.
2012-01-01
The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx–deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx–bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics. PMID:22031726
ERIC Educational Resources Information Center
Gronert, Joie; Marshall, Sally
Developed for elementary teachers, this activity unit is designed to teach students the importance of estimation in developing quantitative thinking. Nine ways in which estimation is useful to students are listed, and five general guidelines are offered to the teacher for planning estimation activities. Specific guidelines are provided for…
Estimating Physical Activity in Youth Using a Wrist Accelerometer
Crouter, Scott E.; Flynn, Jennifer I.; Bassett, David R.
2014-01-01
PURPOSE The purpose of this study was to develop and validate methods for analyzing wrist accelerometer data in youth. METHODS 181 youth (mean±SD; age, 12.0±1.5 yrs) completed 30-min of supine rest and 8-min each of 2 to 7 structured activities (selected from a list of 25). Receiver Operator Characteristic (ROC) curves and regression analyses were used to develop prediction equations for energy expenditure (child-METs; measured activity VO2 divided by measured resting VO2) and cut-points for computing time spent in sedentary behaviors (SB), light (LPA), moderate (MPA), and vigorous (VPA) physical activity. Both vertical axis (VA) and vector magnitude (VM) counts per 5 seconds were used for this purpose. The validation study included 42 youth (age, 12.6±0.8 yrs) who completed approximately 2-hrs of unstructured PA. During all measurements, activity data were collected using an ActiGraph GT3X or GT3X+, positioned on the dominant wrist. Oxygen consumption was measured using a Cosmed K4b2. Repeated measures ANOVAs were used to compare measured vs predicted child-METs (regression only), and time spent in SB, LPA, MPA, and VPA. RESULTS All ROC cut-points were similar for area under the curve (≥0.825), sensitivity (≥0.756), and specificity (≥0.634) and they significantly underestimated LPA and overestimated VPA (P<0.05). The VA and VM regression models were within ±0.21 child-METs of mean measured child-METs and ±2.5 minutes of measured time spent in SB, LPA, MPA, and VPA, respectively (P>0.05). CONCLUSION Compared to measured values, the VA and VM regression models developed on wrist accelerometer data had insignificant mean bias for child-METs and time spent in SB, LPA, MPA, and VPA; however they had large individual errors. PMID:25207928
An alternative method to measure the likelihood of a financial crisis in an emerging market
NASA Astrophysics Data System (ADS)
Özlale, Ümit; Metin-Özcan, Kıvılcım
2007-07-01
This paper utilizes an early warning system in order to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit and an increase in default risk raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced in this paper can be applied to other emerging market economies as well.
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
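For the finite-Markov-chain case the abstract mentions, a plug-in transfer entropy estimate can be computed directly from empirical transition counts; by the paper's result, 2N times this estimate behaves asymptotically as the log-likelihood ratio statistic for the null of zero transfer (χ²-distributed under the null). The binary toy processes below are illustrative, not from the paper.

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(Y -> X) for binary series, in nats.

    TE = sum over (x', x, y) of p(x',x,y) * log[ p(x'|x,y) / p(x'|x) ]."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x) - 1
    # Joint counts of (x_{t+1}, x_t, y_t), with a tiny pseudo-count.
    cnt = np.full((2, 2, 2), 1e-9)
    for t in range(n):
        cnt[x[t + 1], x[t], y[t]] += 1
    p_xxy = cnt / n
    p_xy = p_xxy.sum(axis=0)     # p(x_t, y_t)
    p_xx = p_xxy.sum(axis=2)     # p(x_{t+1}, x_t)
    p_x = p_xx.sum(axis=0)       # p(x_t)
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                te += p_xxy[a, b, c] * np.log(
                    (p_xxy[a, b, c] / p_xy[b, c]) / (p_xx[a, b] / p_x[b]))
    return te

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 5001)
x_coupled = np.concatenate([[0], y[:-1]])   # x copies y with lag 1
te_coupled = transfer_entropy(x_coupled, y)
te_null = transfer_entropy(rng.integers(0, 2, 5001), y)
```

With perfect lag-1 copying of a fair coin, the transfer entropy approaches ln 2 ≈ 0.693 nats, while for independent series it is near zero (of order the χ² bias, df/(2N)).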
A penalized likelihood approach for mixture cure models.
Corbière, Fabien; Commenges, Daniel; Taylor, Jeremy M G; Joly, Pierre
2009-02-01
Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals that will never experience it. Important issues in mixture cure models are estimation of the baseline survival function for susceptibles and estimation of the variance of the regression parameters. The aim of this paper is to propose a penalized likelihood approach, which allows for flexible modeling of the hazard function for susceptible individuals using M-splines. This approach also permits direct computation of the variance of parameters using the inverse of the Hessian matrix. Properties and limitations of the proposed method are discussed and an illustration from a cancer study is presented.
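The mixture cure likelihood described above can be sketched with a deliberately simplified latency model: an exponential survival for susceptibles stands in for the paper's flexible M-spline hazard, and all names and numbers below are illustrative. Events contribute the uncured density; censored observations contribute the mixture survival.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, event):
    """Negative log-likelihood of a mixture cure model with an exponential
    latency distribution (a stand-in for the paper's M-spline hazard).

    Population survival: S(t) = pi + (1 - pi) * exp(-rate * t)."""
    logit_pi, log_rate = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))        # cured fraction in (0, 1)
    rate = np.exp(log_rate)                     # hazard of susceptibles
    dens = (1 - pi) * rate * np.exp(-rate * t)  # uncured event density
    surv = pi + (1 - pi) * np.exp(-rate * t)
    return -np.where(event, np.log(dens), np.log(surv)).sum()

# Simulate: 40% cured (never fail), susceptibles fail at rate 0.5,
# administrative censoring at t = 10 (hypothetical numbers).
rng = np.random.default_rng(2)
n = 2000
cured = rng.random(n) < 0.4
t_event = rng.exponential(2.0, n)               # mean 2 => rate 0.5
t = np.where(cured, 10.0, np.minimum(t_event, 10.0))
event = ~cured & (t_event < 10.0)
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(t, event))
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
rate_hat = np.exp(fit.x[1])
```

Mirroring the paper's point about variances, `fit.hess_inv` from the quasi-Newton optimizer gives an approximate inverse Hessian from which standard errors of the (transformed) parameters can be read off.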
Ellis, J. A.; Siemens, X.; Van Haasteren, R.
2013-05-20
Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10⁻⁷-10⁻⁹ Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes.
Impedance rheoplethysmography. The role of estimation of vasodilatory activity.
Demenge, P; Silice, C; Lebas, J F; Piquard, J F; Carraz, G
1979-01-01
The activity of a number of vasodilatory drugs was studied, with the help of impedance rheoplethysmography, on the vascular bed of the hind limb of anaesthetized rabbits. The vasodilators under study induce changes in the rheoplethysmogram to varying degrees. The results were compared with those obtained with electromagnetic flowmetry. Impedance rheoplethysmography seems to be useful in the study of vasodilators because it allows their effects, and the duration thereof, to be measured in a non-invasive way. Combined with flowmetry, it also permits an analytical study of those substances' effects on arteries, veins and capillaries.
Likelihood-free Bayesian computation for structural model calibration: a feasibility study
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2016-04-01
Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem, prone to be ill-posed and ill-conditioned when there are many errors and uncertainties in both the FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief on the parameters with the available measurements. In Bayesian inference, the likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, the likelihood should be carefully chosen to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be independent identically distributed Gaussian with zero mean and constant variance. However, this assumption may cause biased and over/underestimated parameter estimates, so that the uncertainty quantification and prediction are questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduces approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing behavioral similarities between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and its predictive capability for measurements not used in calibration is also secured.
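The likelihood-free idea can be sketched with the simplest ABC variant, rejection sampling: draw parameters from the prior, simulate, and keep the draws whose simulated summaries land closest to the observation. The toy "model updating" problem below (unknown stiffness, measured natural frequency proportional to its square root) is illustrative, not the paper's FE setup or its MCMC sampler.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, n_draws=20000, quantile=0.01):
    """Likelihood-free (ABC) rejection sampler: keep the prior draws whose
    simulated summary statistics fall closest to the observed one.

    simulate(theta, rng) -> summary statistic; prior_draw(rng) -> theta."""
    rng = np.random.default_rng(3)
    thetas = np.array([prior_draw(rng) for _ in range(n_draws)])
    sims = np.array([simulate(th, rng) for th in thetas])
    dist = np.abs(sims - observed)
    keep = dist <= np.quantile(dist, quantile)
    return thetas[keep]                        # approximate posterior draws

# Hypothetical forward model: frequency = sqrt(stiffness) + noise.
simulate = lambda th, rng: np.sqrt(th) + rng.normal(0, 0.05)
prior_draw = lambda rng: rng.uniform(1.0, 9.0)
posterior = abc_rejection(observed=2.0, simulate=simulate, prior_draw=prior_draw)
theta_hat = posterior.mean()
```

With an observed frequency of 2.0 the accepted draws concentrate near stiffness 4, without any explicit likelihood ever being evaluated; likelihood-free MCMC replaces the brute-force rejection step with a Markov chain but rests on the same comparison of simulated and measured behavior.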
Methodology for a bounding estimate of activation source-term.
Culp, Todd
2013-02-01
Sandia National Laboratories' Z-Machine is the world's most powerful electrical device, and experiments have been conducted that make it the world's most powerful radiation source. Because Z-Machine is used for research, an assortment of materials can be placed into the machine; these materials can be subjected to a range of nuclear reactions, producing an assortment of activation products. A methodology was developed to provide a systematic approach to evaluate different materials to be introduced into the machine as wire arrays. This methodology is based on experiment specific characteristics, physical characteristics of specific radionuclides, and experience with Z-Machine. This provides a starting point for bounding calculations of radionuclide source-term that can be used for work planning, development of work controls, and evaluating materials for introduction into the machine.
[Ventricular activation sequence estimated by body surface isochrone map].
Hayashi, H; Ishikawa, T; Takami, K; Kojima, H; Yabe, S; Ohsugi, S; Miyachi, K; Sotobata, I
1985-06-01
This study was performed to evaluate the usefulness of the body surface isochrone map (VAT map) for identifying the ventricular activation sequence, and it was correlated with the isopotential map. Subjects consisted of 42 normal healthy adults, 18 patients with artificial ventricular pacemakers, and 100 patients with ventricular premature beats (VPB). The sites of pacemaker implantations were the right ventricular endocardial apex (nine cases), right ventricular epicardial apex (five cases), right ventricular inflow tract (one case), left ventricular epicardial apex (one case), and posterior base of the left ventricle via the coronary sinus (two cases). An isopotential map was recorded by the mapper HPM-6500 (Chunichi-Denshi Co.) on the basis of an 87 unipolar lead ECG, and a VAT isochrone map was drawn by a minicomputer. The normal VAT map was classified by type according to alignment of isochrone lines, and their frequency was 57.1% for type A, 16.7% for type B, and 26.2% for type C. In the VAT map of ventricular pacing, the body surface area of initial isochrone lines represented well the sites of pacemaker stimuli. In the VAT map of VPB, the sites of origin of VPB agreed well with those as determined by the previous study using an isopotential map. The density of the isochrone lines suggested the mode of conduction via the specialized conduction system or ventricular muscle. The VAT map is a very useful diagnostic method to predict the ventricular activation sequence more directly in a single sheet of the map. PMID:2419457
Estimation of Exercise Intensity in “Exercise and Physical Activity Reference for Health Promotion”
NASA Astrophysics Data System (ADS)
Ohkubo, Tomoyuki; Kurihara, Yosuke; Kobayashi, Kazuyuki; Watanabe, Kajiro
Maintaining and promoting the health of elderly citizens is of great importance to Japan. Given the circumstances, the Ministry of Health, Labour and Welfare has established standards for activities and exercises that promote health, and has quantitatively determined the exercise intensity of 107 activities. Calculating this exercise intensity, however, requires recording the type and duration of each activity. In this paper, the exercise intensities of 25 daily activities are estimated using a 3D accelerometer. As a result, the exercise intensities were estimated within a root mean square error of 0.83 METs for all 25 activities.
ERIC Educational Resources Information Center
Schembre, Susan M.; Riebe, Deborah A.
2011-01-01
Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO₂max) as well as sub-maximal exercise testing. The International Physical Activity Questionnaire is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO₂…
Comparing Participants' Rating and Compendium Coding to Estimate Physical Activity Intensities
ERIC Educational Resources Information Center
Masse, Louise C.; Eason, Karen E.; Tortolero, Susan R.; Kelder, Steven H.
2005-01-01
This study assessed agreement between participants' rating (PMET) and compendium coding (CMET) of estimating physical activity intensity in a population of older minority women. As part of the Women on the Move study, 224 women completed a 7-day activity diary and wore an accelerometer for 7 days. All activities recorded were coded using PMET and…
Estimating Toxicity Pathway Activating Doses for High Throughput Chemical Risk Assessments
Estimating a Toxicity Pathway Activating Dose (TPAD) from in vitro assays as an analog to a reference dose (RfD) derived from in vivo toxicity tests would facilitate high throughput risk assessments of thousands of data-poor environmental chemicals. Estimating a TPAD requires def...
Eliciting information from experts on the likelihood of rapid climate change.
Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil
2005-12-01
The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or through the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations. PMID:16506972
Intelligence's likelihood and evolutionary time frame
NASA Astrophysics Data System (ADS)
Bogonovich, Marc
2011-04-01
This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.
Lessler, Justin; Metcalf, C. Jessica E.; Grais, Rebecca F.; Luquero, Francisco J.; Cummings, Derek A. T.; Grenfell, Bryan T.
2011-01-01
Background The performance of routine and supplemental immunization activities is usually measured by the administrative method: dividing the number of doses distributed by the size of the target population. This method leads to coverage estimates that are sometimes impossible (e.g., vaccination of 102% of the target population), and are generally inconsistent with the proportion found to be vaccinated in Demographic and Health Surveys (DHS). We describe a method that estimates the fraction of the population accessible to vaccination activities, as well as within-campaign inefficiencies, thus providing a consistent estimate of vaccination coverage. Methods and Findings We developed a likelihood framework for estimating the effective coverage of vaccination programs using cross-sectional surveys of vaccine coverage combined with administrative data. We applied our method to measles vaccination in three African countries: Ghana, Madagascar, and Sierra Leone, using data from each country's most recent DHS survey and administrative coverage data reported to the World Health Organization. We estimate that 93% (95% CI: 91, 94) of the population in Ghana was ever covered by any measles vaccination activity, 77% (95% CI: 78, 81) in Madagascar, and 69% (95% CI: 67, 70) in Sierra Leone. “Within-activity” inefficiencies were estimated to be low in Ghana, and higher in Sierra Leone and Madagascar. Our model successfully fits age-specific vaccination coverage levels seen in DHS data, which differ markedly from those predicted by naïve extrapolation from country-reported and World Health Organization–adjusted vaccination coverage. Conclusions Combining administrative data with survey data substantially improves estimates of vaccination coverage. Estimates of the inefficiency of past vaccination activities and the proportion not covered by any activity allow us to more accurately predict the results of future activities and provide insight into the ways in which
García-Pastor, Andrés; Díaz-Otero, Fernando; Funes-Molina, Carmen; Benito-Conde, Beatriz; Grandes-Velasco, Sandra; Sobrino-García, Pilar; Vázquez-Alén, Pilar; Fernández-Bullido, Yolanda; Villanueva-Osorio, Jose Antonio; Gil-Núñez, Antonio
2015-10-01
A dose of 0.9 mg/kg of intravenous tissue plasminogen activator (t-PA) has proven to be beneficial in the treatment of acute ischemic stroke (AIS). Dosing of t-PA based on estimated patient weight (PW) increases the likelihood of errors. Our objectives were to evaluate the accuracy of estimated PW and assess the effectiveness and safety of the actual applied dose (AAD) of t-PA. We performed a prospective single-center study of AIS patients treated with t-PA from May 2010 to December 2011. Dose was calculated according to estimated PW. Patients were weighed during the 24 h following treatment with t-PA. Estimation errors and AAD were calculated. Actual PW was measured in 97 of the 108 included patients. PW estimation errors were recorded in 22.7 % and were more frequent when weight was estimated by stroke unit staff (44 %). Only 11 % of patients misreported their own weight. Mean AAD was significantly higher in patients who had intracerebral hemorrhage (ICH) after t-PA than in patients who did not (0.96 vs. 0.92 mg/kg; p = 0.02). Multivariate analysis showed an increased risk of ICH for each 10 % increase in t-PA dose above the optimal dose of 0.90 mg/kg (OR 3.10; 95 % CI 1.14-8.39; p = 0.026). No effects of t-PA misdosing were observed on symptomatic ICH, functional outcome or mortality. Estimated PW is frequently inaccurate and leads to t-PA dosing errors. Increasing doses of t-PA above 0.90 mg/kg may increase the risk of ICH. Standardized weighing methods before t-PA is administered should be considered.
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses. PMID:21138291
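The Monte Carlo core of this methodology can be sketched in a few lines: sample the uncertain inputs, push each draw through a reduced-form response, and report the fraction of draws that attain the target. The lognormal sensitivity and all numbers below are hypothetical, standing in for the paper's photochemical reduced-form model.

```python
import numpy as np

def attainment_likelihood(base_ozone, response_fn, target, n=10000):
    """Monte Carlo estimate of the probability that an emission control
    meets an air-quality target under uncertain model inputs.

    response_fn(rng) -> ozone reduction (ppb) for one draw of the
    uncertain inputs; attainment means base_ozone - reduction <= target."""
    rng = np.random.default_rng(4)
    reductions = np.array([response_fn(rng) for _ in range(n)])
    return np.mean(base_ozone - reductions <= target)

# Uncertain emissions sensitivity: reduction = sensitivity * tons cut,
# with a lognormal sensitivity (illustrative numbers).
response = lambda rng: rng.lognormal(mean=np.log(0.08), sigma=0.3) * 100.0
p_attain = attainment_likelihood(base_ozone=92.0, response_fn=response,
                                 target=85.0)
```

Rather than a bright-line yes/no, the output is a probability of attainment, which is exactly what allows the probabilistic and deterministic rankings of control strategies to diverge.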
Assessing allelic dropout and genotype reliability using maximum likelihood.
Miller, Craig R; Joyce, Paul; Waits, Lisette P
2002-01-01
A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective, but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071
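The dropout-rate likelihood can be sketched for the simplest case: known heterozygotes typed repeatedly, where a replicate is scored heterozygous only if neither allele drops out. Independent per-allele dropout, giving success probability (1 - d)², is a simplification of the paper's model; the counts below are hypothetical.

```python
import numpy as np

def dropout_log_lik(d, k_het_observed, k_replicates):
    """Binomial log-likelihood of allelic dropout rate d for known
    heterozygotes typed k_replicates times in total, of which
    k_het_observed replicates were scored heterozygous.

    Per replicate, a heterozygote is scored correctly only if neither
    allele drops out: probability (1 - d)^2 under independent dropout."""
    p_het = (1.0 - d) ** 2
    k, n = k_het_observed, k_replicates
    return k * np.log(p_het) + (n - k) * np.log(1.0 - p_het)

# Grid-search MLE over pooled replicates (illustrative counts).
k_obs, n_reps = 140, 200        # 140 of 200 replicates scored heterozygous
grid = np.linspace(0.001, 0.6, 1000)
ll = [dropout_log_lik(d, k_obs, n_reps) for d in grid]
d_hat = grid[int(np.argmax(ll))]
```

Here the MLE solves (1 - d)² = 140/200, i.e. d = 1 - sqrt(0.7) ≈ 0.163; in the paper's framework such per-locus estimates drive where replication effort is directed.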
CORA: Emission Line Fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
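The Poisson maximum-likelihood fit that CORA performs can be sketched generically: minimize the negative Poisson log-likelihood (the Cash statistic, up to a constant) of a Gaussian line plus flat background against raw counts. The simulated spectrum below is illustrative, not the Capella LETGS data, and scipy's Nelder-Mead stands in for CORA's fixed-point scheme.

```python
import numpy as np
from scipy.optimize import minimize

def neg_poisson_log_lik(params, wav, counts):
    """Negative Poisson log-likelihood (up to a constant) of a Gaussian
    emission line on a flat background, evaluated on raw counts.

    model_i = bg + amp * exp(-(wav_i - center)^2 / (2 sigma^2))"""
    bg, amp, center, sigma = params
    model = bg + amp * np.exp(-0.5 * ((wav - center) / sigma) ** 2)
    model = np.clip(model, 1e-12, None)     # keep the log well-defined
    return np.sum(model - counts * np.log(model))

# Simulate a line at 13.45 Å with Poisson noise (hypothetical numbers).
rng = np.random.default_rng(5)
wav = np.linspace(13.0, 14.0, 200)
truth = 2.0 + 30.0 * np.exp(-0.5 * ((wav - 13.45) / 0.02) ** 2)
counts = rng.poisson(truth)
fit = minimize(neg_poisson_log_lik, x0=[2.0, 25.0, 13.46, 0.025],
               args=(wav, counts), method="Nelder-Mead",
               options={"maxiter": 5000})
bg_hat, amp_hat, center_hat, sigma_hat = fit.x
```

Working on the counts themselves, rather than on binned Gaussian residuals, is what makes this approach valid in the low-count regime the abstract emphasizes.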
CORA - emission line fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
Simulated likelihood methods for complex double-platform line transect surveys.
Schweder, T; Skaug, H J; Langaas, M; Dimakos, X K
1999-09-01
The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995. PMID:11314993
Augmented composite likelihood for copula modeling in family studies under biased sampling.
Zhong, Yujie; Cook, Richard J
2016-07-01
The heritability of chronic diseases can be effectively studied by examining the nature and extent of within-family associations in disease onset times. Families are typically accrued through a biased sampling scheme in which affected individuals are identified and sampled along with their relatives, who may provide right-censored or current status data on their disease onset times. We develop likelihood and composite likelihood methods for modeling the within-family association in these times through copula models in which dependencies are characterized by Kendall's τ. Auxiliary data from independent individuals are exploited by augmenting composite likelihoods to increase precision of marginal parameter estimates and consequently increase efficiency in dependence parameter estimation. An application to a motivating family study in psoriatic arthritis illustrates the method and provides some evidence of excessive paternal transmission of risk. PMID:26819481
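The copula/Kendall's τ link can be illustrated with the Clayton family, for which τ = θ/(θ + 2) in closed form. The sketch below samples dependent pairs by conditional inversion and recovers τ empirically; Clayton is chosen for convenience and is not necessarily the copula used in the paper.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_pair(theta, n, rng):
    """Sample n dependent (U, V) pairs from a Clayton copula via
    conditional inversion; Kendall's tau for Clayton is theta/(theta+2)."""
    u = rng.random(n)
    w = rng.random(n)
    # Conditional quantile of V given U = u at level w, from
    # C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta).
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta)
         + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(6)
theta = 2.0                     # implies tau = 2 / (2 + 2) = 0.5
u, v = clayton_pair(theta, 20000, rng)
tau_hat, _ = kendalltau(u, v)
```

Parameterizing dependence through τ, as the abstract does, keeps the association interpretable regardless of which copula family generates it.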
Measuring slope to improve energy expenditure estimates during field-based activities
Duncan, Glen E.; Lester, Jonathan; Migotsky, Sean; Higgins, Lisa; Borriello, Gaetano
2013-01-01
This technical note describes methods to improve activity energy expenditure estimates using a multi-sensor board (MSB) by measuring slope. Ten adults walked over a 2.5-mile course wearing an MSB and mobile calorimeter. Energy expenditure was estimated using accelerometry alone (base) and four methods to measure slope. The barometer and GPS methods improved accuracy 11% from the base (Ps < 0.05) to 86% overall. Measuring slope using the MSB improves energy expenditure estimates during field-based activities. PMID:23537030
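The barometric slope method mentioned above can be sketched directly: convert pressure to altitude with the standard-atmosphere formula, then take rise over run per segment. The constants are the usual standard-atmosphere values and the walking segment below is hypothetical; the note's MSB pipeline is more elaborate.

```python
import numpy as np

def slope_from_barometer(pressures_hpa, distances_m):
    """Per-segment slope (rise/run) from barometric pressure, using the
    standard-atmosphere altitude formula.

    altitude ≈ 44330 * (1 - (p / 1013.25)^(1/5.255)) metres."""
    p = np.asarray(pressures_hpa, dtype=float)
    alt = 44330.0 * (1.0 - (p / 1013.25) ** (1.0 / 5.255))
    return np.diff(alt) / np.asarray(distances_m, dtype=float)

# Three 100 m segments with slightly falling pressure (i.e. climbing).
slopes = slope_from_barometer([1013.25, 1012.6, 1012.0, 1011.5],
                              [100.0, 100.0, 100.0])
```

Near sea level 1 hPa corresponds to roughly 8.3 m of altitude, so sub-hPa pressure changes over 100 m segments resolve grades of a few percent, enough to adjust the energy-expenditure estimate for uphill versus downhill walking.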
Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty
ERIC Educational Resources Information Center
Lagattuta, Kristin Hansen; Sayfan, Liat
2011-01-01
Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…
Simčič, Tatjana; Pajk, Franja; Jaklič, Martina; Brancelj, Anton; Vrezec, Al
2014-04-01
Whether electron transport system (ETS) activity could be used as an estimator of crayfish thermal tolerance has been investigated experimentally. Food consumption rate, respiration rates in the air and water, the difference between energy consumption and respiration costs at a given temperature ('potential growth scope', PGS), and ETS activity of Orconectes limosus and Pacifastacus leniusculus were determined over a temperature range of 5-30°C. All of the measured parameters were found to be temperature dependent. The significant correlation between ETS activity and PGS indicates that they respond similarly to temperature change. The regression analysis of ETS activity as an estimator of thermal tolerance at the mitochondrial level and PGS as an indicator of thermal tolerance at the organismic level showed a shift of the optimum temperature range of ETS activity to the right by 2° in O. limosus and by 3° in P. leniusculus. Thus, lower estimated temperature optima and temperatures of optimum ranges of PGS compared to ETS activity could indicate higher thermal sensitivity at the organismic level than at a lower level of complexity (i.e. at the mitochondrial level). The response of ETS activity to temperature change, especially at lower and higher temperatures, indicates differences in the characteristics of the ETSs in O. limosus and P. leniusculus. O. limosus is less sensitive to high temperature. The significant correlation between PGS and ETS activity supports our assumption that ETS activity could be used for the rapid estimation of thermal tolerance in crayfish species. PMID:24679968
Inverse estimation of multiple muscle activations from joint moment with muscle synergy extraction.
Li, Zhan; Guiraud, David; Hayashibe, Mitsuhiro
2015-01-01
Human movement is produced by synergetic combinations of multiple muscle contractions. The resultant joint movement can be estimated from the related multiple-muscle activities, which is formulated as the forward problem. Neuroprosthetic applications may benefit from cocontraction of agonist and antagonist muscle pairs to achieve more stable and robust joint movements. It is necessary to estimate the activation of each individual muscle from the desired joint torque(s), which is the inverse problem. A synergy-based solution is presented for the inverse estimation of multiple muscle activations from joint movement, focusing on one degree-of-freedom tasks. The approach comprises muscle synergy extraction via the nonnegative matrix factorization algorithm. Cross validation is performed to evaluate the method for prediction accuracy based on experimental data from ten able-bodied subjects. The results demonstrate that the approach succeeds in inversely estimating the multiple muscle activations from a given joint torque sequence. In addition, applying the averaged synergy ratios of the remaining subjects in a leave-one-out cross-validation manner resulted in a 9.3% estimation error across all subjects. The obtained results support the common muscle-synergy-based neuroprosthetic control concept.
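As a rough illustration of the synergy-extraction step, the following sketch factors a synthetic muscle-activation matrix with Lee-Seung multiplicative updates for nonnegative matrix factorization; the matrix shapes, iteration count, and two-synergy example are illustrative assumptions, not the paper's data or implementation:

```python
import numpy as np

def extract_synergies(M, n_syn, n_iter=500, seed=0):
    """Factor a nonnegative muscle-activation matrix M (muscles x samples)
    into synergy vectors W (muscles x n_syn) and activation coefficients
    H (n_syn x samples) via Lee-Seung multiplicative NMF updates."""
    rng = np.random.default_rng(seed)
    n_mus, n_smp = M.shape
    W = rng.random((n_mus, n_syn)) + 1e-3
    H = rng.random((n_syn, n_smp)) + 1e-3
    eps = 1e-9                       # avoids division by zero
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two ground-truth synergies shared by four muscles (synthetic example)
W_true = np.array([[1.0, 0.0], [0.8, 0.2], [0.1, 0.9], [0.0, 1.0]])
H_true = np.abs(np.random.default_rng(1).random((2, 200)))
M = W_true @ H_true
W, H = extract_synergies(M, 2)
err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)   # relative error
```

Because the synthetic data are exactly rank two and nonnegative, the reconstruction error should shrink to a small fraction of the signal norm.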
Planck 2013 results. XV. CMB power spectra and likelihood
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. 
M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK2 at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary PlanckEE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the
González, Juan M.
1999-01-01
Unlike the fraction of active bacterioplankton, the fraction of active bacterivores (i.e., those involved in grazing) during a specified time period has not been studied yet. Fractions of protists actively involved in bacterivory were estimated assuming that the distributions of bacteria and fluorescently labeled bacteria (FLB) ingested by protists follow Poisson distributions. Estimates were compared with experimental data obtained from FLB uptake experiments. The percentages of protists with ingested FLB (experimental) and the estimates obtained from Poisson distributions were similar for both flagellates and ciliates. Thus, the fraction of protists actively grazing on natural bacteria during a given time period could be estimated. The fraction of protists with ingested bacteria depends on the incubation time and reaches a saturating value. Aquatic systems with very different characteristics were analyzed; estimates of the fraction of protists actively grazing on bacteria ranged from 7 to 100% in the studied samples. Some nanoflagellates appeared to be grazing on specific bacterial sizes. Evidence indicated that there was no discrimination for or against bacterial surrogates (i.e., FLB); also, bacteria were randomly encountered by bacterivorous protists during these short-term uptake experiments. These analyses made it possible to estimate the ingestion rates from FLB uptake experiments by counting the number of flagellates containing ingested FLB. These results represent the first reported estimates of active bacterivores in natural aquatic systems; also, a proposed protocol for estimating in situ ingestion rates by protists represents a significant improvement and simplification to the current protocol and avoids the tedious work of counting the number of ingested FLB per protist. PMID:10103238
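The Poisson argument can be sketched numerically: under random encounter, the fraction of protists containing at least one FLB is 1 - e^(-λ), and if the tracers make up a known fraction of the total bacteria, inverting the zero class gives the fraction grazing on any bacteria. The function names and the tracer-fraction extrapolation are illustrative assumptions, not the authors' exact protocol:

```python
import math

def expected_fraction_with_flb(mean_flb_per_protist):
    """Under a Poisson model of FLB ingestion, the expected fraction of
    protists with at least one FLB is 1 - P(X = 0) = 1 - exp(-lambda)."""
    return 1.0 - math.exp(-mean_flb_per_protist)

def fraction_active(frac_with_flb, frac_tracers):
    """If FLB make up a fraction frac_tracers of total bacteria and
    encounters are random, the mean number of bacteria (of any kind)
    ingested is lambda_flb / frac_tracers; the fraction of protists
    actively grazing is then 1 - exp(-lambda_total)."""
    lam_flb = -math.log(1.0 - frac_with_flb)   # invert the Poisson zero class
    return 1.0 - math.exp(-lam_flb / frac_tracers)
```

For example, if 10% of protists contain FLB and FLB are 10% of the bacteria, roughly two thirds of the protists would be estimated as actively grazing.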
Maximum likelihood dipole fitting in spatially colored noise.
Baryshnikov, B V; Van Veen, B D; Wakai, R T
2004-11-30
We evaluated a maximum likelihood dipole-fitting algorithm for somatosensory evoked field (SEF) MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, which eliminates the stationarity assumption implicit in prestimulus based whitening approaches. The performance of the method, including its effectiveness in comparison to other localization techniques (dipole fitting, LCMV and MUSIC) was evaluated using the bootstrap technique. Synthetic data results demonstrated robustness of the algorithm in the presence of relatively high levels of noise when traditional dipole fitting algorithms fail. Application of the algorithm to adult somatosensory MEG data showed that while it is not advantageous for high SNR data, it definitely provides improved performance (measured by the spread of localizations) as the data sample size decreases.
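The key idea, estimating the spatial noise covariance from the residuals of the multiepoch data itself rather than from a prestimulus baseline, can be sketched as follows. This is a generic residual-covariance estimator and eigenvalue-based whitener under simplifying assumptions, not the authors' exact method:

```python
import numpy as np

def estimate_spatial_covariance(epochs):
    """Estimate spatial noise covariance from multiepoch evoked data:
    the evoked response is the across-epoch mean, and per-epoch residuals
    after subtracting it are treated as noise realisations
    (epochs: n_epochs x n_channels x n_times)."""
    resid = epochs - epochs.mean(axis=0)
    n_ep, n_ch, n_t = resid.shape
    X = resid.transpose(1, 0, 2).reshape(n_ch, -1)
    return (X @ X.T) / (n_ep * n_t - 1)

def whiten(data, cov):
    """Spatially whiten channel data with the inverse symmetric square
    root of the covariance matrix."""
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return W @ data

# Simulated spatially colored noise: 300 epochs, 4 channels, 60 samples
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.3, 0.0],
              [0.0, 0.0, 1.0, 0.2],
              [0.1, 0.0, 0.0, 1.0]])          # spatial mixing matrix
epochs = np.einsum('ij,ejt->eit', A, rng.standard_normal((300, 4, 60)))
cov = estimate_spatial_covariance(epochs)      # should approximate A @ A.T
```

After whitening with the estimated covariance, the channel covariance should be close to the identity, which is what makes a white-noise likelihood model applicable.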
Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Wells, W. R.
1979-01-01
A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.
Feng, Feng; Kepler, Thomas B
2015-01-01
Surface plasmon resonance (SPR) has previously been employed to measure the active concentration of analyte in addition to the kinetic rate constants in molecular binding reactions. Those approaches, however, have a few restrictions. In this work, a Bayesian approach is developed to determine both active concentration and affinity constants using SPR technology. With appropriate prior probabilities on the parameters and a derived likelihood function, a Markov Chain Monte Carlo (MCMC) algorithm is applied to compute the posterior probability densities of both the active concentration and the kinetic rate constants from the collected SPR data. Compared with previous approaches, ours exploits information from the entire time course, including both association and dissociation phases, under partial mass transport conditions; it does not depend on calibration data; and multiple injections of analyte at varying flow rates are not necessary. Finally, the method is validated by analyzing both simulated and experimental datasets. A software package implementing our approach is developed with a user-friendly interface and made freely available. PMID:26098764
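The MCMC machinery can be illustrated with a deliberately simplified model: a random-walk Metropolis sampler for a single dissociation rate constant kd under a one-parameter exponential decay. This is an illustrative stand-in, not the paper's full binding model with mass transport, its priors, or its concentration parameter:

```python
import numpy as np

def log_likelihood(kd, t, y, sigma=0.05):
    """Gaussian log-likelihood (up to a constant) of dissociation-phase
    data under a simple 1:1 model R(t) = exp(-kd * t)."""
    resid = y - np.exp(-kd * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(t, y, n_samples=5000, step=0.02, seed=0):
    """Random-walk Metropolis sampler for kd with a flat prior on kd > 0
    (proposals outside the support are rejected)."""
    rng = np.random.default_rng(seed)
    kd = 0.5                                  # deliberately poor start
    ll = log_likelihood(kd, t, y)
    samples = []
    for _ in range(n_samples):
        prop = kd + step * rng.standard_normal()
        if prop > 0:
            llp = log_likelihood(prop, t, y)
            if np.log(rng.random()) < llp - ll:   # Metropolis acceptance
                kd, ll = prop, llp
        samples.append(kd)
    return np.array(samples)

# Synthetic dissociation curve with true kd = 0.3 and 5% noise
t = np.linspace(0, 10, 50)
rng = np.random.default_rng(1)
y = np.exp(-0.3 * t) + 0.05 * rng.standard_normal(t.size)
post = metropolis(t, y)
kd_hat = post[1000:].mean()     # posterior mean after discarding burn-in
```

The posterior mean should land close to the true rate constant; in the real setting the same loop would run over the full association-dissociation model with several parameters.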
Modelling default and likelihood reasoning as probabilistic
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.
Groups, information theory, and Einstein's likelihood principle
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio
2016-04-01
We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data
ERIC Educational Resources Information Center
Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.
2010-01-01
Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Improving consumption rate estimates by incorporating wild activity into a bioenergetics model.
Brodie, Stephanie; Taylor, Matthew D; Smith, James A; Suthers, Iain M; Gray, Charles A; Payne, Nicholas L
2016-04-01
Consumption is the basis of metabolic and trophic ecology and is used to assess an animal's trophic impact. The contribution of activity to an animal's energy budget is an important parameter when estimating consumption, yet activity is usually measured in captive animals. Developments in telemetry have allowed the energetic costs of activity to be measured for wild animals; however, wild activity is seldom incorporated into estimates of consumption rates. We calculated the consumption rate of a free-ranging marine predator (yellowtail kingfish, Seriola lalandi) by integrating the energetic cost of free-ranging activity into a bioenergetics model. Accelerometry transmitters were used in conjunction with laboratory respirometry trials to estimate kingfish active metabolic rate in the wild. These field-derived consumption rate estimates were compared with those estimated by two traditional bioenergetics methods. The first method derived routine swimming speed from fish morphology as an index of activity (a "morphometric" method), and the second considered activity as a fixed proportion of standard metabolic rate (a "physiological" method). The mean consumption rate for free-ranging kingfish measured by accelerometry was 152 J·g(-1)·day(-1), which lay between the estimates from the morphometric method (μ = 134 J·g(-1)·day(-1)) and the physiological method (μ = 181 J·g(-1)·day(-1)). Incorporating field-derived activity values resulted in the smallest variance in log-normally distributed consumption rates (σ = 0.31), compared with the morphometric (σ = 0.57) and physiological (σ = 0.78) methods. Incorporating field-derived activity into bioenergetics models probably provided more realistic estimates of consumption rate compared with the traditional methods, which may further our understanding of trophic interactions that underpin ecosystem-based fisheries management. The general methods used to estimate active metabolic rates of free-ranging fish
Tracking of EEG activity using motion estimation to understand brain wiring.
Nisar, Humaira; Malik, Aamir Saeed; Ullah, Rafi; Shim, Seong-O; Bawakid, Abdullah; Khan, Muhammad Burhan; Subhani, Ahmad Rauf
2015-01-01
The fundamental step in brain research deals with recording electroencephalogram (EEG) signals and then investigating the recorded signals quantitatively. Topographic EEG (a visual spatial representation of the EEG signal) is commonly referred to as brain topomaps or brain EEG maps. In this chapter, a full-search block motion estimation algorithm has been employed to track the brain activity in brain topomaps to understand the mechanism of brain wiring. The behavior of EEG topomaps is examined throughout a particular brain activation with respect to time. Motion vectors are used to track the brain activation over the scalp during the activation period. Using motion estimation, it is possible to track the path from the starting point of activation to the final point of activation. Thus it is possible to track the path of a signal across various lobes.
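A full-search block motion estimation step of the kind described can be sketched as follows; the block size, search radius, and sum-of-absolute-differences (SAD) matching criterion are conventional choices, not necessarily those used in the chapter:

```python
import numpy as np

def full_search_motion(prev, curr, block=8, radius=4):
    """Exhaustive (full-search) block matching: for each block of the
    current frame, scan every displacement within +/-radius in the
    previous frame and keep the one minimising the SAD."""
    H, W = curr.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = curr[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - tgt).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

# Demo: the current frame is the previous frame shifted down 2 px, right 1 px
rng = np.random.default_rng(0)
prev = rng.standard_normal((16, 16))
curr = np.roll(prev, shift=(2, 1), axis=(0, 1))
vectors = full_search_motion(prev, curr)
```

Applied to successive topomap frames, the resulting motion vectors trace how an activation focus moves over the scalp.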
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated atmospheric CO2 fertilization effect. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling.
Canham, Charles D; Uriarte, María
2006-02-01
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate
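The four steps named above can be walked through end-to-end on simulated data: specify two candidate models of seed counts versus distance, estimate parameters by maximum likelihood (here a coarse grid search), and compare the models with AIC. The models and data are hypothetical illustrations, not the authors' analyses:

```python
import numpy as np

def neg_log_lik_poisson(rate, counts):
    """Negative Poisson log-likelihood, dropping the log(k!) term, which
    is constant across models fitted to the same data."""
    return float(np.sum(rate - counts * np.log(rate)))

def fit_constant(counts, dist):
    """Model 1: seed counts have a single mean everywhere (1 parameter);
    the Poisson MLE of that mean is the sample mean."""
    lam = counts.mean()
    return neg_log_lik_poisson(np.full(counts.shape, lam), counts), 1

def fit_decay(counts, dist):
    """Model 2: the mean declines exponentially with distance from the
    parent tree (2 parameters), fitted by a coarse grid search."""
    best = (np.inf, None)
    for a in np.linspace(1, 30, 60):
        for b in np.linspace(0.01, 1.0, 60):
            nll = neg_log_lik_poisson(a * np.exp(-b * dist), counts)
            best = min(best, (nll, (a, b)))
    return best[0], 2

# Simulated seed-trap counts generated by a decaying dispersal kernel
rng = np.random.default_rng(0)
dist = np.linspace(0, 20, 100)
counts = rng.poisson(10 * np.exp(-0.25 * dist))
aic = {}
for name, fit in [("constant", fit_constant), ("decay", fit_decay)]:
    nll, k = fit(counts, dist)
    aic[name] = 2 * k + 2 * nll    # AIC = 2k - 2 ln L
```

Model comparison then selects the dispersal-kernel model, mirroring step (3) of the likelihood analysis workflow described above.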
Joint torque and angle estimation by using ultrasonic muscle activity sensor
NASA Astrophysics Data System (ADS)
Tsutsui, Yoichiro; Tanaka, Takayuki; Kaneko, Shun'ichi; Feng, Maria Q.
2005-12-01
We have proposed a new noninvasive ultrasonic sensor for measuring muscle activity, named the Ultrasonic Muscle Activity Sensor (UMS). In a previous paper, the authors accurately estimated joint torque by using the UMS together with electromyography (EMG), one of the most popular sensing modalities. This paper aims to measure not only joint torque but also joint angle by using the UMS and EMG. In order to estimate the torque and angle of a knee joint, the muscle activities of the quadriceps femoris and biceps femoris, which are related to extension and flexion of the knee joint, were measured by both UMS and EMG. Simultaneously, the actual torque on the knee joint produced by these muscles was measured with a torque sensor. The knee joint angle was fixed by the torque sensor during the experiment, so the measurements were made in an isometric state. As a result, we found that the estimated torque and angle are highly correlated with the actual torque and angle. This means that the sensor can be used for angle estimation as well as torque estimation. It is therefore shown that the combined use of UMS and EMG is effective for both torque and angle estimation.
Non-exercise estimation of VO2max using the International Physical Activity Questionnaire
Schembre, Susan M.; Riebe, Deborah A.
2011-01-01
Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO2max) as well as submaximal exercise testing can. The International Physical Activity Questionnaire (IPAQ) is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO2max estimation equation derived from the IPAQ-Short Form (IPAQ-S). College-aged males and females (n = 80) completed the IPAQ-S and performed a maximal exercise test. The estimation equation was created with multivariate regression in a gender-balanced subsample of participants, equally representing five levels of fitness (n = 50), and validated in the remaining participants (n = 30). The resulting equation explained 43% of the variance in measured VO2max (SEE = 5.45 ml·kg-1·min-1). Estimated VO2max for 87% of individuals fell within the acceptable limits of error observed with submaximal exercise testing (20% error). The IPAQ-S can be used to estimate VO2max as successfully as submaximal exercise tests can. Development of other population-specific estimation equations is warranted. PMID:21927551
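The abstract does not give the paper's regression terms, but the construction it describes, fitting a multivariate linear model to measured VO2max and scoring it by explained variance, can be sketched as follows. The predictors (IPAQ MET-minutes, sex, body mass) and all data below are hypothetical illustrations, not the published equation:

```python
import numpy as np

# Hypothetical predictors; the paper's actual regression terms are not
# stated in the abstract, so these stand in for illustration only.
rng = np.random.default_rng(0)
n = 50
met_minutes = rng.uniform(500, 5000, n)   # weekly MET-minutes from IPAQ-S
sex = rng.integers(0, 2, n)               # 0 = female, 1 = male
mass_kg = rng.uniform(50, 95, n)

# Synthetic "measured" VO2max (ml·kg-1·min-1), illustrative only.
vo2max = 30 + 0.004 * met_minutes + 6 * sex - 0.05 * mass_kg + rng.normal(0, 5, n)

# Fit the multivariate linear model by ordinary least squares.
X = np.column_stack([np.ones(n), met_minutes, sex, mass_kg])
coef, *_ = np.linalg.lstsq(X, vo2max, rcond=None)

# R^2 of the fit (the paper reports 43% explained variance for its equation).
pred = X @ coef
r2 = 1 - np.sum((vo2max - pred) ** 2) / np.sum((vo2max - vo2max.mean()) ** 2)
print(coef, r2)
```

Validation on a held-out subsample, as in the paper, would simply apply `coef` to the remaining participants' predictors and compare against their measured values.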
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including an analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery models, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
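A minimal sketch of the numerical approach for a one-age-class model, assuming constant annual survival S and recovery rate f with Brownie-style cell probabilities; the recovery counts and exact parameterization below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Single cohort of N banded birds; probability of recovery in year j is
# pi_j = S**(j-1) * f (survive j-1 years, then be recovered).
N = 1000
recoveries = np.array([80, 55, 38, 25, 16])  # hypothetical counts, years 1..5

def neg_log_lik(params):
    s, f = params
    j = np.arange(1, len(recoveries) + 1)
    p = s ** (j - 1) * f                  # cell probabilities
    p_never = 1.0 - p.sum()               # banded but never recovered
    n_never = N - recoveries.sum()
    return -(recoveries * np.log(p)).sum() - n_never * np.log(p_never)

# Numerical maximization of the multinomial likelihood.
res = minimize(neg_log_lik, x0=[0.6, 0.1],
               bounds=[(0.01, 0.95), (0.001, 0.2)])
s_hat, f_hat = res.x
print(s_hat, f_hat)
```

The inverse of the numerically estimated Hessian of `neg_log_lik` at the optimum would supply the asymptotic variances used for maximum likelihood inference.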
PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference
Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier
2005-01-01
PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml. PMID:15980534
Liu, Z; Liu, C; He, B
2006-01-01
This paper presents a novel electrocardiographic inverse approach for imaging the 3-D ventricular activation sequence based on the modeling and estimation of the equivalent current density throughout the entire myocardial volume. The spatio-temporal coherence of the ventricular excitation process is utilized to derive the activation time from the estimated time course of the equivalent current density. At each time instant during the period of ventricular activation, the distributed equivalent current density is noninvasively estimated from body surface potential maps (BSPM) using a weighted minimum norm approach with a spatio-temporal regularization strategy based on the singular value decomposition of the BSPMs. The activation time at any given location within the ventricular myocardium is determined as the time point with the maximum local current density estimate. Computer simulation has been performed to evaluate the capability of this approach to image the 3-D ventricular activation sequence initiated from a single pacing site in a physiologically realistic cellular automaton heart model. The simulation results demonstrate that the simulated "true" activation sequence can be accurately reconstructed with an average correlation coefficient of 0.90, relative error of 0.19, and the origin of ventricular excitation can be localized with an average localization error of 5.5 mm for 12 different pacing sites distributed throughout the ventricles.
Shielding and activity estimator for template-based nuclide identification methods
Nelson, Karl Einar
2013-04-09
According to one embodiment, a method for estimating an activity of one or more radio-nuclides includes receiving one or more templates, the one or more templates corresponding to one or more radio-nuclides which contribute to a probable solution, receiving one or more weighting factors, each weighting factor representing a contribution of one radio-nuclide to the probable solution, computing an effective areal density for each of the one or more radio-nuclides, computing an effective atomic number (Z) for each of the one or more radio-nuclides, computing an effective metric for each of the one or more radio-nuclides, and computing an estimated activity for each of the one or more radio-nuclides. In other embodiments, computer program products, systems, and other methods are presented for estimating an activity of one or more radio-nuclides.
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used for calibration and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), the Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are informal, whereas the remaining three (L5-L7) are formal. L5 builds on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, the serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. The posterior cumulative distributions corresponding to the informal likelihood functions L1-L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost identical effect on parameter sensitivity. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7.
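As an illustration of one of the informal likelihood functions named above, the Nash-Sutcliffe efficiency is computed directly from observed and simulated flow series:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit; values <= 0
    mean the simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 5.0, 4.0, 2.0])
print(nash_sutcliffe(obs, obs))                      # perfect fit -> 1.0
print(nash_sutcliffe(obs, np.full(5, obs.mean())))   # mean predictor -> 0.0
```

In an informal-likelihood setting such as the study's L1, this score (rather than a probabilistic error model) drives the acceptance of candidate parameter sets.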
Heitmann, Ryan J; Mumford, Sunni L; Hill, Micah J; Armstrong, Alicia Y
2014-10-01
Unintended pregnancy is reportedly higher in active duty women; therefore, we sought to estimate the potential impact the levonorgestrel-containing intrauterine system (LNG-IUS) could have on unintended pregnancy in active duty women. A decision tree model with sensitivity analysis was used to estimate the number of unintended pregnancies in active duty women which could be prevented. A secondary cost analysis was performed to analyze the direct cost savings to the U.S. Government. The total number of Armed Services members is estimated to be over 1.3 million, with an estimated 208,146 being women. Assuming an age-standardized unintended pregnancy rate of 78 per 1,000 women, 16,235 unintended pregnancies occur each year. Using a combined LNG-IUS failure and expulsion rate of 2.2%, a decrease of 794, 1588, and 3970 unintended pregnancies was estimated to occur with 5%, 10% and 25% usage, respectively. Annual cost savings from LNG-IUS use range from $3,387,107 to $47,352,295 with 5% to 25% intrauterine device usage. One-way sensitivity analysis demonstrated LNG-IUS to be cost-effective when the cost associated with pregnancy and delivery exceeded $11,000. Use of LNG-IUS could result in significant reductions in unintended pregnancy among active duty women, resulting in substantial cost savings to the government health care system. PMID:25269131
Estimating the magnitude of near-membrane PDE4 activity in living cells.
Xin, Wenkuan; Feinstein, Wei P; Britain, Andrea L; Ochoa, Cristhiaan D; Zhu, Bing; Richter, Wito; Leavesley, Silas J; Rich, Thomas C
2015-09-15
Recent studies have demonstrated that functionally discrete pools of phosphodiesterase (PDE) activity regulate distinct cellular functions. While the importance of localized pools of enzyme activity has become apparent, few studies have estimated enzyme activity within discrete subcellular compartments. Here we present an approach to estimate near-membrane PDE activity. First, total PDE activity is measured using traditional PDE activity assays. Second, known cAMP concentrations are dialyzed into single cells and the spatial spread of cAMP is monitored using cyclic nucleotide-gated channels. Third, mathematical models are used to estimate the spatial distribution of PDE activity within cells. Using this three-tiered approach, we observed two pharmacologically distinct pools of PDE activity, a rolipram-sensitive pool and an 8-methoxymethyl IBMX (8MM-IBMX)-sensitive pool. We observed that the rolipram-sensitive PDE (PDE4) was primarily responsible for cAMP hydrolysis near the plasma membrane. Finally, we observed that PDE4 was capable of blunting cAMP levels near the plasma membrane even when 100 μM cAMP were introduced into the cell via a patch pipette. Two compartment models predict that PDE activity near the plasma membrane, near cyclic nucleotide-gated channels, was significantly lower than total cellular PDE activity and that a slow spatial spread of cAMP allowed PDE activity to effectively hydrolyze near-membrane cAMP. These results imply that cAMP levels near the plasma membrane are distinct from those in other subcellular compartments; PDE activity is not uniform within cells; and localized pools of AC and PDE activities are responsible for controlling cAMP levels within distinct subcellular compartments.
Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging.
Hossain, J; Du, Y; Links, J; Rahmim, A; Karakatsanis, N; Akhbardeh, A; Lyons, J; Frey, E C
2015-04-21
Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of (99m)Tc-teboroxime imaging. Uptake of (99m)Tc-teboroxime based on data from the literature were modeled. Data were simulated using the anatomically-realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the myocardial and blood-pool TACs for the organ VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (product of defect extent and severity) was ≤ 5%. This indicates that the method was robust to modest model mismatch such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈ 5% for a defect with 25% extent and 100% severity).
A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution.
Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
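The likelihood step can be sketched with SciPy's generalized gamma distribution: fit the distribution to one cluster's pixel intensities, then score candidate pixels by log-likelihood under the fitted cluster model. The cluster data and parameter values below are hypothetical, and the paper's full superpixel pipeline (clustering and edge evolution) is omitted:

```python
import numpy as np
from scipy.stats import gengamma

# Hypothetical intensities for one pixel cluster, drawn from a known
# generalized gamma distribution for illustration.
rng = np.random.default_rng(1)
cluster_intensities = gengamma.rvs(a=2.0, c=1.5, scale=50.0, size=2000,
                                   random_state=rng)

# Estimate the shape/scale parameters of the cluster (location fixed at 0,
# as intensities are non-negative).
a_hat, c_hat, loc_hat, scale_hat = gengamma.fit(cluster_intensities, floc=0)

# Per-pixel log-likelihood under the fitted cluster model; in the improved
# SLIC scheme such scores would be combined with spatial proximity.
pixels = np.array([20.0, 60.0, 200.0])
loglik = gengamma.logpdf(pixels, a_hat, c_hat, loc=0, scale=scale_hat)
print(loglik)
```

A pixel near the bulk of the cluster's intensity distribution receives a higher log-likelihood than an outlier, which is what lets the likelihood term counteract speckle-driven misassignments.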
The maximum likelihood dating of magnetostratigraphic sections
NASA Astrophysics Data System (ADS)
Man, Otakar
2011-04-01
In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.
Disequilibrium mapping: Composite likelihood for pairwise disequilibrium
Devlin, B.; Roeder, K.; Risch, N.
1996-08-15
The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a Composite Likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction θ between the disease locus and an arbitrary marker locus, the age of the mutation, and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
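A toy sketch of the composite-likelihood scan: treat each marker's observed disequilibrium as an independent pairwise term whose expectation decays with distance from the candidate locus position, sum the per-marker log-likelihoods, and scan a grid of positions. The exponential decay model, Gaussian error, and all numbers below are illustrative stand-ins, not the paper's evolutionary model:

```python
import numpy as np

# Marker positions (cM) and hypothetical observed pairwise disequilibrium
# with the disease locus; disequilibrium peaks at the nearest marker.
positions = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
d_obs = np.array([0.30, 0.55, 0.80, 0.52, 0.28])

def composite_loglik(theta, rate=1.0, sigma=0.1):
    # Expected disequilibrium decays with distance from candidate position
    # theta; rate would depend on the mutation's age in a real model.
    expected = 0.8 * np.exp(-rate * np.abs(positions - theta))
    # Sum of per-marker Gaussian log-likelihood terms (constants dropped).
    return -0.5 * np.sum(((d_obs - expected) / sigma) ** 2)

# Maximize the composite likelihood over a grid of candidate positions.
grid = np.linspace(0.0, 2.0, 201)
cl = np.array([composite_loglik(t) for t in grid])
theta_hat = grid[np.argmax(cl)]
print(theta_hat)
```

The resulting curve of `cl` against `grid` is the kind of graph the abstract describes for directing the search toward the disease locus.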
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
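DILI's operator-weighted proposals are involved, but the simplest dimension-robust relative of these function-space samplers, the preconditioned Crank-Nicolson (pCN) sampler, already shows why proposals that preserve the Gaussian reference measure avoid discretization dependence: the acceptance ratio involves only the likelihood. The toy likelihood below is a hypothetical illustration, not one of the paper's inverse problems:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(u):
    # Illustrative misfit: observe the mean of the discretized function u
    # with Gaussian noise of standard deviation 0.1.
    return -0.5 * ((u.mean() - 1.0) / 0.1) ** 2

def pcn(n_steps, dim, beta=0.2):
    u = np.zeros(dim)                 # start at the N(0, I) prior mean
    ll = log_likelihood(u)
    accepted = 0
    for _ in range(n_steps):
        # pCN proposal: an AR(1) move that exactly preserves the prior,
        # so it stays well defined as dim (the discretization) grows.
        v = np.sqrt(1 - beta ** 2) * u + beta * rng.standard_normal(dim)
        ll_v = log_likelihood(v)
        if np.log(rng.uniform()) < ll_v - ll:   # likelihood-only ratio
            u, ll = v, ll_v
            accepted += 1
    return u, accepted / n_steps

u, acc_rate = pcn(5000, dim=200)
print(u.mean(), acc_rate)
```

DILI goes further by weighting the proposal with local Hessian information so that likelihood-informed directions are explored with adapted step sizes, while the complementary (prior-dominated) subspace keeps this dimension-independent behavior.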
Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.
Chor, Benny; Snir, Sagi
2004-12-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system; therefore, specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed-form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.
Georgoulis, Michael; Fragopoulou, Elisabeth; Kontogianni, Meropi D; Margariti, Aikaterini; Boulamatsi, Olga; Detopoulou, Paraskeui; Tiniakos, Dina; Zafiropoulou, Rodessa; Papatheodoridis, George
2015-01-01
It is well established that oxidative stress is implicated in nonalcoholic fatty liver disease pathogenesis, whereas the dietary intake of antioxidants has been reported to be low in patients with the disease. We hypothesized that blood redox status measurements would be associated with nonalcoholic fatty liver disease presence and severity, and that diet's total antioxidant capacity could moderate the aforementioned association. The study sample consisted of 73 patients with nonalcoholic fatty liver disease, of which 58 were matched by age, sex, and body mass index with 58 controls. Diet's total antioxidant capacity was estimated through the ferric-reducing antioxidant power, the total radical-trapping antioxidant parameter, and the Trolox equivalent antioxidant capacity scores, whereas blood redox status was assessed by measuring thiobarbituric acid reactive substances levels, the enzymatic activity of glutathione peroxidase, and serum resistance to oxidation. Diet's total antioxidant capacity scores and glutathione peroxidase activity were not significantly associated with the disease presence or severity. Both thiobarbituric acid reactive substances and serum resistance to oxidation were significantly associated with the likelihood of nonalcoholic fatty liver disease (odds ratios [ORs], 7.769 [P = .007] and 0.936 [P = .033], respectively), independently of abdominal fat level, degree of insulin resistance, blood lipid levels, markers of subclinical inflammation, and diet's total antioxidant capacity, but not with the disease histologic severity or stage. Our results support the association between blood redox status and the likelihood of nonalcoholic fatty liver disease regardless of diet's total antioxidant capacity.
Kück, Patrick; Mayer, Christoph; Wägele, Johann-Wolfgang; Misof, Bernhard
2012-01-01
The aim of our study was to test the robustness and efficiency of maximum likelihood with respect to different long branch effects on multiple-taxon trees. We simulated data of different alignment lengths under two different 11-taxon trees and a broad range of different branch length conditions. The data were analyzed with the true model parameters as well as with estimated and incorrect assumptions about among-site rate variation. If length differences between connected branches strongly increase, tree inference with the correct likelihood model assumptions can fail. We found that incorporating invariant sites together with Γ distributed site rates in the tree reconstruction (Γ+I) increases the robustness of maximum likelihood in comparison with models using only Γ. The results show that for some topologies and branch lengths the reconstruction success of maximum likelihood under the correct model is still low for alignments with a length of 100,000 base positions. Altogether, the high confidence that is put in maximum likelihood trees is not always justified under certain tree shapes even if alignment lengths reach 100,000 base positions.
Kobayashi, Ryota; Nishimaru, Hiroshi; Nishijo, Hisao
2016-10-29
The rhythmic activity of motoneurons (MNs) that underlies locomotion in mammals is generated by synaptic inputs from the locomotor network in the spinal cord. Thus, the quantitative estimation of excitatory and inhibitory synaptic conductances is essential to understand the mechanism by which the network generates the functional motor output. Conductance estimation is obtained from the voltage-current relationship measured by voltage-clamp- or current-clamp-recording with knowledge of the leak parameters of the recorded neuron. However, it is often difficult to obtain sufficient data to estimate synaptic conductances due to technical difficulties in electrophysiological experiments using in vivo or in vitro preparations. To address this problem, we estimated the average variations in excitatory and inhibitory synaptic conductance during a locomotion cycle from a single voltage trace without measuring the leak parameters. We found that the conductance variations can be accurately reconstructed from a voltage trace of 10 cycles by analyzing synthetic data generated from a computational model. Next, the conductance variations were estimated from mouse spinal MNs in vitro during drug-induced-locomotor-like activity. We found that the peak of excitatory conductance occurred during the depolarizing phase of the locomotor cycle, whereas the peak of inhibitory conductance occurred during the hyperpolarizing phase. These results suggest that the locomotor-like activity is generated by push-pull modulation via excitatory and inhibitory synaptic inputs. PMID:27561702
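The classical conductance decomposition that this estimation problem builds on can be sketched as a linear solve: at a given phase of the locomotor cycle, the clamp current at several holding potentials follows I = g_leak(V - E_leak) + g_e(V - E_e) + g_i(V - E_i), which is linear in (g_e, g_i) once the leak is known. The paper's contribution is precisely to avoid measuring the leak; all values below are illustrative, not experimental:

```python
import numpy as np

# Illustrative reversal potentials (mV) and conductances (nS).
E_leak, E_e, E_i = -65.0, 0.0, -75.0
g_leak = 10.0
g_e_true, g_i_true = 4.0, 8.0

# Noiseless clamp currents at several holding potentials.
V = np.array([-80.0, -70.0, -60.0, -50.0])
I = (g_leak * (V - E_leak) + g_e_true * (V - E_e)
     + g_i_true * (V - E_i))

# Subtract the known leak current, then solve the 2-parameter
# linear system for the synaptic conductances by least squares.
A = np.column_stack([V - E_e, V - E_i])
b = I - g_leak * (V - E_leak)
(g_e_hat, g_i_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print(g_e_hat, g_i_hat)   # recovers 4.0 and 8.0 exactly (noiseless data)
```

Repeating this solve at each phase of the cycle yields the conductance variations whose peaks, per the abstract, alternate between the depolarizing (excitatory) and hyperpolarizing (inhibitory) phases.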
Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference
NASA Astrophysics Data System (ADS)
Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro
This paper presents a reduced-reference video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video-quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
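The reduced-reference idea can be sketched in a few lines: the sender transmits only a per-block "activity" value (here block standard deviation; the paper's exact activity definition, psychovisual weighting, and 6-bit truncation are omitted), and the receiver scores quality by the activity difference against the received frames.

```python
import numpy as np

def block_activity(frame, bs=16):
    """One activity value (pixel standard deviation) per bs x bs block."""
    h, w = frame.shape
    h, w = h - h % bs, w - w % bs
    blocks = frame[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return blocks.std(axis=(2, 3))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(float)
mild = original + rng.normal(0, 2, original.shape)     # lightly degraded copy
severe = original + rng.normal(0, 30, original.shape)  # heavily degraded copy

ref = block_activity(original)  # the only side information transmitted
diff_mild = np.abs(block_activity(mild) - ref).mean()
diff_severe = np.abs(block_activity(severe) - ref).mean()
```

More distortion produces a larger mean activity difference, which is then mapped to a quality score (the mapping and weighting steps are where the paper's contribution lies).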
Estimation of specific activity of 177Lu by 'saturation assay' principle using DOTA as ligand.
Pillai, Ambikalmajan M R; Chakraborty, Sudipta; Das, Tapas
2015-01-01
Lutetium-177 is a widely used therapeutic radionuclide in targeted therapy, and it is important to know its specific activity at the time of radiopharmaceutical preparation, especially for radiolabeling peptides. However, there are no direct methods for the experimental determination of the specific activity that can be readily applied in a hospital radiopharmacy. A new technique based on the 'saturation assay' principle using DOTA as the binding agent for the estimation of the specific activity of 177Lu is reported. The studies demonstrate the proof of principle of this new assay technique. The method is general and can be modified and applied for the estimation of the specific activity of other metallic radionuclides by using DOTA or other suitable chelating agents.
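The arithmetic behind the saturation-assay principle is simple (all numbers below are hypothetical, not the paper's): a fixed amount of DOTA is titrated with the 177Lu stock, and once the labeling yield starts to fall the chelator is saturated, so the moles of Lu added approximately equal the moles of DOTA, giving the specific activity directly.

```python
# Hypothetical saturation-assay calculation
dota_nmol = 10.0                    # fixed DOTA amount in the assay (nmol)
activity_at_saturation_MBq = 740.0  # 177Lu activity added when yield begins to drop
lu_nmol = dota_nmol                 # at saturation, moles(Lu) ~ moles(DOTA)
specific_activity = activity_at_saturation_MBq / lu_nmol  # MBq/nmol
print(specific_activity)  # 74.0 MBq/nmol
```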
The dud-alternative effect in likelihood judgment.
Windschitl, Paul D; Chambers, John R
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged likelihood of a focal outcome. This dud-alternative effect was detected for judgments involving uncertainty about trivia facts and stochastic events. Nonnumeric likelihood measures and betting measures reliably detected the effect, but numeric likelihood measures did not. Time pressure increased the magnitude of the effect. The results were consistent with a contrast-effect account: The inclusion of duds increases the perceived strength of the evidence for the focal outcome, thereby affecting its judged likelihood.
Estimating Am-241 activity in the body: comparison of direct measurements and radiochemical analyses
Lynch, Timothy P.; Tolmachev, Sergei Y.; James, Anthony C.
2009-06-01
The assessment of dose and ultimately the health risk from intakes of radioactive materials begins with estimating the amount actually taken into the body. An accurate estimate provides the basis to best assess the distribution in the body, the resulting dose, and ultimately the health risk. This study continues the time-honored practice of evaluating the accuracy of results obtained using in vivo measurement methods and techniques. Results from the radiochemical analyses of the 241Am activity content of tissues and organs from four donors to the United States Transuranium and Uranium Registries were compared to the results from direct measurements of radioactive material in the body performed in vivo and post mortem. Two were whole body donations and two were partial body donations. The skeleton was the organ with the highest deposition of 241Am activity in all four cases. The activities ranged from 30 Bq to 300 Bq. The skeletal estimates obtained from measurements over the forehead were within 20% of the radiochemistry results in three cases and differed by 78% in one case. The 241Am lung activity estimates ranged from 1 Bq to 30 Bq in the four cases. The results from the direct measurements were within 40% of the radiochemistry results in three cases and within a factor of 3 for the other case. The direct measurement estimates of liver activity ranged from 2 Bq to 60 Bq and were generally lower than the radiochemistry results. The results from this study suggest that the measurement methods and calibration techniques used at the In Vivo Radiobioassay and Research Facility to quantify the activity in the lungs, skeleton and liver are reasonable under the most challenging conditions where there is 241Am activity in multiple organs. These methods and techniques are comparable to those used at other Department of Energy sites. This suggests that the current in vivo methods and calibration techniques provide reasonable estimates of radioactive material in the body. Not
Mishchenko, Yuriy
2016-10-01
We investigate the properties of the recently proposed "shotgun" sampling approach for the common inputs problem in the functional estimation of neuronal connectivity. We study the asymptotic correctness, the speed of convergence, and the data size requirements of such an approach. We show that the shotgun approach can be expected to allow the inference of the complete connectivity matrix in large neuronal populations under some rather general conditions. However, we find that the posterior error of the shotgun connectivity estimator grows quickly with the size of unobserved neuronal populations, the square of the average connectivity strength, and the square of the observation sparseness. This implies that shotgun connectivity estimation will require significantly larger amounts of neuronal activity data whenever the number of neurons in the observed neuronal populations remains small. We present a numerical approach for solving the shotgun estimation problem in general settings and use it to demonstrate shotgun connectivity inference in examples of simulated synfire and weakly coupled cortical neuronal networks. PMID:27515518
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment
Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael
2015-01-01
Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated-measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov equations of de Leva, with adjustment made for posture cost. A GPML toolbox implementation of Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance in predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming
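One of the baselines the abstract evaluates, a locally weighted k-nearest-neighbour regression from mechanical-work features to metabolic cost, can be sketched in pure numpy. The synthetic features, target function, and inverse-distance weighting below are illustrative, not the study's data or exact scheme.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Inverse-distance-weighted average of the k nearest training targets."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return np.sum(w * y_train[idx]) / np.sum(w)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))  # e.g. segmental work, cadence, posture-cost features
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)  # hypothetical metabolic cost

x_new = np.array([0.5, 0.5, 0.5])
pred = knn_predict(X, y, x_new)  # true underlying value at x_new is 1.25
```

Gaussian Process Regression (the study's best performer) replaces the fixed local weighting with a learned kernel, at higher computational cost.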
Dorsomedial prefrontal cortex activity predicts the accuracy in estimating others' preferences.
Kang, Pyungwon; Lee, Jongbin; Sul, Sunhae; Kim, Hackjin
2013-01-01
The ability to accurately estimate another person's preferences is crucial for a successful social life. In daily interactions, we often do this on the basis of minimal information. The aims of the present study were (a) to examine whether people can accurately judge others based only on a brief exposure to their appearances, and (b) to reveal the underlying neural mechanisms with functional magnetic resonance imaging (fMRI). Participants were asked to make guesses about unfamiliar target individuals' preferences for various items after looking at their faces for 3 s. The behavioral results showed that participants estimated others' preferences above chance level. The fMRI data revealed that higher accuracy in preference estimation was associated with greater activity in the dorsomedial prefrontal cortex (DMPFC) when participants were guessing the targets' preferences relative to thinking about their own preferences. These findings suggest that accurate estimations of others' preferences may require increased activity in the DMPFC. A functional connectivity analysis revealed that higher accuracy in preference estimation was related to increased functional connectivity between the DMPFC and the brain regions that are known to be involved in theory of mind processing, such as the temporoparietal junction (TPJ) and the posterior cingulate cortex (PCC)/precuneus, during correct vs. incorrect guessing trials. On the contrary, the tendency to refer to self-preferences when estimating others' preference was related to greater activity in the ventromedial prefrontal cortex. These findings imply that the DMPFC may be a core region in estimating the preferences of others and that higher accuracy may require stronger communication between the DMPFC and the TPJ and PCC/precuneus, part of a neural network known to be engaged in mentalizing.
NASA Technical Reports Server (NTRS)
Theis, S. W.; Blanchard, B. J.; Blanchard, A. J.
1984-01-01
Multisensor aircraft data were used to establish the potential of the active microwave sensor response to be used to compensate for roughness in the passive microwave sensor's response to soil moisture. Only bare fields were used. It is found that the L-band radiometer's capability to estimate soil moisture significantly improves when surface roughness is accounted for with the scatterometers.
NASA Technical Reports Server (NTRS)
Theis, S. W.; Blanchard, A. J.; Blanchard, B. J.
1986-01-01
Multisensor aircraft data were used to establish the potential of the active microwave sensor response to be used to compensate for roughness in the passive microwave sensor's response to soil moisture. Only bare fields were used. It is found that the L-band radiometer's capability to estimate soil moisture significantly improves when surface roughness is accounted for with the scatterometers.
The estimation of cortical activity for brain-computer interface: applications in a domotic context.
Babiloni, F; Cincotti, F; Marciani, M; Salinari, S; Astolfi, L; Tocci, A; Aloise, F; De Vico Fallani, F; Bufalari, S; Mattia, D
2007-01-01
In order to analyze whether the use of cortical activity, estimated from noninvasive EEG recordings, could be useful to detect mental states related to the imagination of limb movements, we estimated cortical activity from high-resolution EEG recordings in a group of healthy subjects by using realistic head models. Such cortical activity was estimated in regions of interest associated with the subject's Brodmann areas by using a depth-weighted minimum norm technique. Results showed that the use of the cortical-estimated activity instead of the unprocessed EEG improves the recognition of the mental states associated with limb movement imagination in the group of normal subjects. The BCI methodology presented here has been used in a group of disabled patients in order to give them suitable control of several electronic devices disposed in a three-room environment devoted to neurorehabilitation. Four of six patients were able to control several electronic devices in this domotic context with the BCI system. PMID:18350134
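A depth-weighted minimum norm estimate has the closed form x̂ = W Lᵀ (L W Lᵀ + λI)⁻¹ y, where L is the lead field and W down-weights superficial sources. The toy below uses a random lead field and made-up dimensions purely to show the linear algebra; real use requires a realistic head model as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200
L = rng.normal(0, 1, (n_sensors, n_sources))      # toy lead-field matrix
W = np.diag(1.0 / np.linalg.norm(L, axis=0) ** 2) # depth weighting ~ 1/||l_j||^2

x_true = np.zeros(n_sources)
x_true[10] = 1.0                                  # a single active source
y = L @ x_true + rng.normal(0, 0.01, n_sensors)   # simulated EEG measurement

lam = 1e-2                                        # regularization (hypothetical value)
x_hat = W @ L.T @ np.linalg.solve(L @ W @ L.T + lam * np.eye(n_sensors), y)
```

The estimate is spread across sources (the problem is underdetermined), but its largest entry falls on the truly active source.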
Technology Transfer Automated Retrieval System (TEKTRAN)
An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo...
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Estimating Activity and Sedentary Behavior From an Accelerometer on the Hip or Wrist
Rosenberger, Mary E.; Haskell, William L.; Albinali, Fahd; Mota, Selene; Nawyn, Jason; Intille, Stephen
2013-01-01
Previously the National Health and Nutrition Examination Survey measured physical activity with an accelerometer worn on the hip for seven days, but recently changed the location of the monitor to the wrist. PURPOSE: This study compared estimates of physical activity intensity and type with an accelerometer on the hip versus the wrist. METHODS: Healthy adults (n=37) wore triaxial accelerometers (Wockets) on the hip and dominant wrist along with a portable metabolic unit to measure energy expenditure during 20 activities. Motion summary counts were created, then receiver operating characteristic (ROC) curves were used to determine sedentary and activity intensity thresholds. Ambulatory activities were separated from other activities using the coefficient of variation (CV) of the counts. Mixed model predictions were used to estimate activity intensity. RESULTS: The ROC for determining sedentary behavior had greater sensitivity and specificity (71% and 96%) at the hip than the wrist (53% and 76%), as did the ROC for moderate to vigorous physical activity on the hip (70% and 83%) versus the wrist (30% and 69%). The ROC for the CV associated with ambulation had a larger AUC at the hip compared to the wrist (0.83 and 0.74). The prediction model for activity energy expenditure (AEE) resulted in an average difference of 0.55 (±0.55) METs on the hip and 0.82 (±0.93) METs on the wrist. CONCLUSIONS: Methods frequently used for estimating AEE and identifying activity intensity thresholds from an accelerometer on the hip generally do better than similar data from an accelerometer on the wrist. Accurately identifying sedentary behavior from a lack of wrist motion presents significant challenges. PMID:23247702
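The coefficient-of-variation feature used above to separate ambulation from other activity exploits the fact that walking produces rhythmic, high-variability count windows while sedentary behavior produces steady low counts. The synthetic counts and parameters below are illustrative, not the study's thresholds.

```python
import numpy as np

def cv(counts):
    """Coefficient of variation of an accelerometer count window."""
    m = counts.mean()
    return counts.std() / m if m > 0 else 0.0

rng = np.random.default_rng(0)
t = np.arange(600)
# rhythmic counts, e.g. walking at ~step frequency (hypothetical amplitudes)
walking = np.clip(50 + 40 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 5, 600), 0, None)
# steady low counts, e.g. desk work
desk_work = np.clip(20 + rng.normal(0, 2, 600), 0, None)
```

A CV threshold (fit per wear location via ROC analysis, as in the study) then classifies windows as ambulatory or not.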
NASA Astrophysics Data System (ADS)
Randerson, J. T.; Chen, Y.; Giglio, L.; Rogers, B. M.; van der Werf, G.
2011-12-01
In several important biomes, including croplands and tropical forests, many fires are too small to be detected by the current generation of burned area products derived from moderate-resolution spectroradiometers. These fires likely have important effects on greenhouse gas and aerosol emissions and regional air quality. Here we developed an approach for combining 1 km thermal anomalies (active fires; MOD14A2) and 500 m burned area observations (MCD64A1) to estimate the prevalence of these fires and their likely contribution to burned area and carbon emissions. We first estimated active fires within and outside of 500 m burn scars in 0.5 degree grid cells during 2001-2010 for which MCD64A1 burned area observations were available. For these two sets of active fires we then examined mean fire radiative power (FRP) and changes in enhanced vegetation index (EVI) derived from 16-day intervals immediately before and after each active fire observation. To estimate the burned area associated with sub-500 m fires, we first applied burned-area-to-active-fire ratios derived solely from within burned area perimeters to active fires outside of burn perimeters. In a second step, we further modified our sub-500 m burned area estimates using EVI changes from active fires outside and within burned areas (after subtracting EVI changes derived from control regions). We found that in northern and southern Africa savanna regions and in Central and South America dry forest regions, the number of active fires outside of MCD64A1 burned areas increased considerably towards the end of the fire season. EVI changes for active fires outside of burn perimeters were, on average, considerably smaller than EVI changes associated with active fires inside burn scars, providing evidence for burn scars that were substantially smaller than the 25 ha area of a single 500 m pixel. FRP estimates also were lower for active fires outside of burn perimeters. In our
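The two-step scaling described above reduces to simple arithmetic (all numbers below are made up for illustration): burned area per active fire inside mapped perimeters is applied to fires outside them, then adjusted by the ratio of control-corrected EVI changes as a proxy for relative burn size.

```python
# Hypothetical grid-cell values
af_inside, ba_inside = 1200, 30000.0   # active fires and burned area (ha) inside perimeters
af_outside = 400                       # active fires detected outside perimeters
devi_outside, devi_inside = 0.04, 0.10 # mean EVI drop per fire, control-corrected

ba_per_fire = ba_inside / af_inside    # step 1: 25 ha per active fire
# step 2: scale down by the EVI-change ratio (~0.4), i.e. ~4000 ha of sub-500 m burns
ba_small = af_outside * ba_per_fire * (devi_outside / devi_inside)
```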
Differential representations of prior and likelihood uncertainty in the human brain
Vilares, Iris; Howard, James D; Fernandes, Hugo L; Gottfried, Jay A; Kording, Konrad P
2012-01-01
Background: Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood. Results: By varying prior and likelihood uncertainty in a decision-making task while performing neuroimaging in humans, we found that prior and likelihood uncertainty had quite distinct representations. While likelihood uncertainty activated brain regions along the early stages of the visuomotor pathway, representations of prior uncertainty were identified in specialized brain areas outside this pathway, including putamen, amygdala, insula, and orbitofrontal cortex. Furthermore, the magnitude of brain activity in the putamen predicted individuals’ personal tendencies to rely more on either prior or current information. Conclusions: Our results suggest different pathways by which prior and likelihood uncertainty map onto the human brain, and provide a potential neural correlate for higher reliance on current or prior knowledge. Overall, these findings offer insights into the neural pathways that may allow humans to make decisions close to the optimal defined by a Bayesian statistical framework. PMID:22840519
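The Bayesian combination this study builds on is compact: with a Gaussian prior N(μ_p, σ_p²) and a Gaussian likelihood centered on the sensory cue x with variance σ_l², the posterior mean weights each source by its reliability (inverse variance). The numbers below are illustrative.

```python
def posterior_mean(mu_p, var_p, x, var_l):
    """Reliability-weighted combination of prior mean and sensory cue."""
    w = (1.0 / var_p) / (1.0 / var_p + 1.0 / var_l)  # weight on the prior
    return w * mu_p + (1.0 - w) * x

# Noisy cue (high likelihood variance): the estimate leans on the prior (~0.4).
lean_prior = posterior_mean(0.0, 1.0, 2.0, 4.0)
# Reliable cue (low likelihood variance): the estimate leans on the cue (~1.6).
lean_cue = posterior_mean(0.0, 1.0, 2.0, 0.25)
```

The study's finding that putamen activity predicts individual reliance on prior versus current information corresponds to individual differences in this weighting w.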
Likelihood-based Quantification of Agreement between Climate Model Output and NASA Data Records
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Huey, G.; Cressie, N.; Teixeira, J.
2012-12-01
In this talk we discuss the use of formal statistical likelihoods to quantify and assess the consistency of an observed data record with climate model predictions of it. The likelihood function is the conditional probability distribution of an unknown quantity as a function of the conditioning quantity. For instance, if P(A|B) (read "the probability of A given B") is Gaussian with mean B, then the likelihood function for the mean is a function of different candidate values of B: L(b) = P(A|B=b). It shows how the probability of A changes when we assume different values of B are true. Here we let A be an observational statistic, and b be a climate model identifier. We use the time series generated by that climate model to estimate the sampling distribution of A under the hypothesis that the climate model correctly represents the behavior of the atmosphere. Then we "score" the agreement between observations and models by the likelihood value, L(b). In this talk, we discuss our computational approach to estimating the sampling distributions, and report results achieved thus far in scoring the climate models used in the CMIP5 decadal experiments against water vapor data records from NASA's AIRS instrument.
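The scoring idea can be sketched end to end: use a model's time series to estimate the sampling distribution of an observational statistic A (here, a decadal mean, fit with a simple Gaussian; the talk's actual estimation approach may differ), then score each model by L(b) = P(A_obs | model b). All series and numbers below are synthetic.

```python
import numpy as np

def score(model_series, a_obs, block=120):
    """Gaussian likelihood of the observed statistic under one model's climate."""
    # statistic A = mean over a 'decade' of monthly values; estimate its
    # sampling distribution by blocking the model's long control series
    n = len(model_series) // block * block
    blocks = model_series[:n].reshape(-1, block).mean(axis=1)
    mu, sd = blocks.mean(), blocks.std(ddof=1)
    return np.exp(-0.5 * ((a_obs - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
model_good = rng.normal(1.0, 0.3, 12000)  # hypothetical model consistent with obs
model_bad = rng.normal(2.0, 0.3, 12000)   # hypothetical biased model
a_obs = 1.02                              # observed decadal-mean statistic
```

The model whose internal variability places the observed statistic in a high-density region receives the larger likelihood score.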
The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data
NASA Technical Reports Server (NTRS)
Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; Fowler, J. W.; Gralla, M.; Hajian, A.; Halpern, M.; Hasselfield, M.; Hincks, A. D.; Hlozek, R.; Hughes, J. P.; Irwin, K. D.; Kosowsky, A.; Louis, T.; Marriage, T. A.; Marsden, D.; Menanteau, F.; Niemack, M.
2013-01-01
The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
The Atacama Cosmology Telescope: likelihood for small-scale CMB data
Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G.E.; Halpern, M.; Battaglia, N.; Battistelli, E.S.; Bond, J.R.; Hajian, A.; Hincks, A.D.; Das, S.; Devlin, M.J.; Dünner, R.; Fowler, J.W.; Irwin, K.D.; Gralla, M.; Hasselfield, M.; Hlozek, R.; Hughes, J.P.; Kosowsky, A.; and others
2013-07-01
The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
Active galactic nucleus black hole mass estimates in the era of time domain astronomy
Kelly, Brandon C.; Treu, Tommaso; Pancoast, Anna; Malkan, Matthew; Woo, Jong-Hak
2013-12-20
We investigate the dependence of the normalization of the high-frequency part of the X-ray and optical power spectral densities (PSDs) on black hole mass for a sample of 39 active galactic nuclei (AGNs) with black hole masses estimated from reverberation mapping or dynamical modeling. We obtained new Swift observations of PG 1426+015, which has the largest estimated black hole mass of the AGNs in our sample. We develop a novel statistical method to estimate the PSD from a light curve of photon counts with arbitrary sampling, eliminating the need to bin a light curve to achieve Gaussian statistics, and we use this technique to estimate the X-ray variability parameters for the faint AGNs in our sample. We find that the normalization of the high-frequency X-ray PSD is inversely proportional to black hole mass. We discuss how to use this scaling relationship to obtain black hole mass estimates from the short timescale X-ray variability amplitude with precision ∼0.38 dex. The amplitude of optical variability on timescales of days is also anticorrelated with black hole mass, but with larger scatter. Instead, the optical variability amplitude exhibits the strongest anticorrelation with luminosity. We conclude with a discussion of the implications of our results for estimating black hole mass from the amplitude of AGN variability.
NASA Technical Reports Server (NTRS)
Manning, Robert M. (Inventor)
2007-01-01
Method and apparatus for estimating the signal-to-noise ratio (SNR) γ of a composite input signal e(t) on a phase-modulated (e.g., BPSK) communications link. A first demodulator receives the composite input signal and a stable carrier signal and outputs an in-phase output signal; a second demodulator receives the composite input signal and a phase-shifted version of the carrier signal and outputs a quadrature-phase output signal; and the phase error θ_E(t) contained within the composite input signal e(t) is calculated from the outputs of the first and second demodulators. A time series of statistically independent phase error measurements θ_E(t_1), θ_E(t_2), ..., θ_E(t_k) is obtained from the composite input signal, subtending a time interval Δt = t_k - t_1 whose value is small enough that γ(t) and σ(t) can be taken to be constant over Δt. A biased estimate γ* of the SNR γ of the composite input signal is calculated using maximum likelihood (ML) estimation techniques, and an unbiased estimate γ^ of the SNR γ of the composite input signal is determined from the biased estimate γ*, such as by use of a look-up table.
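The biased ML estimate described above can be illustrated with the standard high-SNR approximation for BPSK, where the phase-error variance is roughly 1/(2γ). A minimal simulation sketch under that assumption (the numbers are hypothetical, and the patent's look-up-table correction for bias is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

gamma_true = 20.0  # hypothetical true SNR (linear scale, not dB)

# High-SNR BPSK approximation: phase error ~ Gaussian with var = 1/(2*gamma).
k = 10_000  # number of statistically independent phase-error samples
theta_e = rng.normal(0.0, np.sqrt(1.0 / (2.0 * gamma_true)), size=k)

# Biased ML-style SNR estimate recovered from the sample phase-error variance.
gamma_star = 1.0 / (2.0 * np.var(theta_e))
print(gamma_star)  # close to gamma_true for large k
```

The estimate converges on the true SNR as k grows; for a short observation window the bias correction mentioned in the record becomes important.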
Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie
2011-01-01
Background Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and is therefore often less effective for discovering drugs to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for
The Dud-Alternative Effect in Likelihood Judgment
ERIC Educational Resources Information Center
Windschitl, Paul D.; Chambers, John R.
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged…
Yamasaki, Taiga; Idehara, Katsutoshi; Xin, Xin
2016-07-01
We propose a new method to estimate muscle activity in a straightforward manner with high accuracy and relatively small computational cost by using the external input of the joint angle and its first to fourth derivatives with respect to time. The method solves the inverse dynamics problem of the skeletal system, the forward dynamics problem of the muscular system, and the load-sharing problem of muscles as a static optimization of neural excitation signals. The external input, including the higher-order derivatives, is required for the calculation of constraints imposed on the load-sharing problem. The feasibility of the method is demonstrated by the simulation of a simple musculoskeletal model with a single joint. Moreover, the influences of the muscular dynamics and the higher-order derivatives on the estimation of muscle activity are demonstrated by showing the results when the time constants of the activation dynamics are very small and when the third and fourth derivatives of the external input are ignored, respectively. It is concluded that the method has the potential to improve the estimation accuracy of muscle activity in highly dynamic motions. PMID:27211782
Rosenblum, Sara; Engel-Yeger, Batya
2015-06-01
It is well established that physical activity during childhood contributes to children's physical and psychological health. The aim of this study was to test the reliability and validity of the Hebrew version of the Teacher Estimation of Activity Form (TEAF) questionnaire as a screening tool among school-aged children in Israel. Six physical education teachers completed TEAF questionnaires for 123 children aged 5-12 years: 68 children (55%) with Typical Development (TD) and 55 children (45%) diagnosed with Developmental Coordination Disorder (DCD). The Hebrew version of the TEAF showed a very high level of internal consistency (α = .97). There were no significant gender differences. Significant differences were found between children with and without DCD, attesting to the test's construct validity. Concurrent validity was established by finding a significant high correlation (r = .76, p < .01) between the TEAF and Movement-ABC mean scores within the DCD group. The TEAF demonstrated acceptable reliability and validity estimates. It appears to be a promising standardized practical tool in both research and practice for describing school-aged children's involvement in physical activity. Further research with larger samples is indicated to establish cut-off scores identifying hypoactivity in stratified age groups. Furthermore, since the majority of the participants in this study were boys, further research including more girls is needed for a better understanding of the phenomenon of hypoactivity.
NASA Astrophysics Data System (ADS)
Okumura, K.
2011-12-01
Accurate location and geometry of seismic sources are critical for estimating strong ground motion. A complete and precise rupture history is also critical for estimating the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most likely parameters. Multiple options for logic trees are acceptable only after thorough examination of contradicting estimates, and should not result from easy compromise or suspension of judgment. In the process of preparing and revising the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are ~1 cm/yr and 500 to 800 years, respectively, and the estimated maximum magnitude is 7.5 to 8.5. These faults are very hazardous because almost the entire population and industry are located within tectonic depressions above the faults. As to fault location, most uncertainty arises from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to incorrect mapping. Though a longer, non-existent fault may seem the safer estimate, incorrectness harms the reliability of the forecast. It also does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important for estimating intensity distribution. For the middle portion of the ISTL, fast left-lateral strike-slip of up to 1 cm/yr is obvious. Recent seismicity possibly induced by 2011 Tohoku
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.
2008-12-01
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
Shao, Qi; Buchanan, Thomas S.
2008-01-01
We have created a model to estimate the corrective changes in muscle activation patterns needed for a person who has had a stroke to walk with an improved gait, nearing that of an unimpaired person. Using this model, we examined how different functional electrical stimulation (FES) protocols would alter gait patterns. The approach is based on an electromyographically (EMG) driven model to estimate joint moments. Different stimulation protocols were examined which generated different corrective muscle activation patterns. These approaches grouped the muscles together into flexor and extensor groups (to simulate FES using surface electrodes) or left each muscle to vary independently (to simulate FES using intramuscular electrodes). In addition, we limited the maximal change in muscle activation (to reduce fatigue). We observed that with the two protocols (grouped and ungrouped muscles), the calculated corrective changes in muscle activation yielded improved joint moments nearly matching those of unimpaired subjects. The protocols yielded different muscle activation patterns, which could be selected based on practical conditions. These calculated corrective muscle activation changes can be used in studying FES protocols, to determine the feasibility of gait retraining with FES for a given subject and to determine which protocols are most reasonable. PMID:18762296
Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C
2016-05-01
The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports.
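The reported concentration of risk (one-tenth of ports accounting for 80% or more of the establishment likelihood) can be checked for any vector of per-port likelihoods by sorting and accumulating. A small sketch with fabricated numbers chosen to reproduce that degree of skew:

```python
import numpy as np

# Fabricated per-port establishment likelihoods (arbitrary units),
# deliberately skewed so that two ports dominate.
lik = np.array([50.0, 30.0, 4.0, 3.0, 2.0, 2.0, 1.5, 1.5, 1.0, 1.0,
                0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.25, 0.2, 0.15, 0.1])

share = np.sort(lik)[::-1].cumsum() / lik.sum()  # cumulative share, largest first
top = int(np.ceil(0.1 * len(lik)))               # how many ports make up the top 10%
print(share[top - 1])  # fraction of total likelihood those ports account for (0.8 here)
```

With real data the same cumulative-share curve shows how far surveillance resources concentrated at a few ports can go.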
... the likelihood of critical missed steps in the operating room. Research Activities, July 2013.
Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji
2011-12-15
Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because the physical and chemical properties of the measured object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843
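The core of LW-PLS is that every query spectrum gets its own calibration model, with training samples weighted by similarity to the query. A simplified numpy sketch of that idea, in which plain weighted least squares stands in for the PLS inner model, and the similarity kernel and its width are assumptions rather than the paper's definition:

```python
import numpy as np

def lw_predict(X_cal, y_cal, x_query, sigma=1.0):
    """Predict y at x_query by fitting a model weighted toward calibration
    samples similar to the query (the locally weighted idea behind LW-PLS).
    Weighted least squares stands in for the PLS inner model here."""
    d = np.linalg.norm(X_cal - x_query, axis=1)       # distance to query spectrum
    w = np.exp(-d / (sigma * (d.std() + 1e-12)))      # similarity weights
    sw = np.sqrt(w)[:, None]                          # row scaling implements weighting
    Xa = np.hstack([X_cal, np.ones((len(X_cal), 1))]) # add intercept column
    beta, *_ = np.linalg.lstsq(sw * Xa, sw[:, 0] * y_cal, rcond=None)
    return np.append(x_query, 1.0) @ beta

# Hypothetical calibration set: 2 "wavelengths", linear API-content response.
rng = np.random.default_rng(0)
X_cal = rng.normal(size=(50, 2))
y_cal = X_cal @ np.array([1.0, 2.0]) + 3.0
y_hat = lw_predict(X_cal, y_cal, np.array([0.5, -0.2]))
print(y_hat)  # recovers the linear response at the query point
```

The local weighting pays off when the spectrum-to-content relationship is nonlinear globally but approximately linear near each query.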
Family Characteristics Associated with Likelihood of Varicella Vaccination
Weinmann, Sheila; Mullooly, John P; Drew, Lois; Chun, Colleen S
2016-01-01
Context: The introduction of the varicella vaccine as a routine pediatric immunization in the US, in 1995, provided an opportunity to assess factors associated with uptake of new vaccines in the member population of the Kaiser Permanente Northwest (KPNW) Health Plan. Objective: Identify factors associated with varicella vaccination in the KPNW population in the first five years after varicella vaccine was introduced. Design: A retrospective cohort of children under age 13 years between June 1995 and December 1999, without a history of varicella disease was identified using KPNW automated data. Membership records were linked to vaccine databases. Cox regression was used to estimate likelihood of varicella vaccination during the study period in relation to age, sex, primary clinician’s specialty, and Medicaid eligibility. For a subset whose parents answered a behavioral health survey, additional demographic and behavioral characteristics were evaluated. Main Outcome Measure: Varicella vaccination. Results: We identified 88,646 children under age 13 years without a history of varicella; 22% were vaccinated during the study period. Varicella vaccination was more likely among children who were born after 1995, were not Medicaid recipients, or had pediatricians as primary clinicians. In the survey-linked cohort, positively associated family characteristics included smaller family size; higher socioeconomic status; and parents who were older, were college graduates, reported excellent health, and received influenza vaccination. Conclusion: Understanding predictors of early varicella vaccine-era vaccine acceptance may help in planning for introduction of new vaccines to routine schedules. PMID:27104589
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
“Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to the population of origin on the basis of multilocus genotype and expectations of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). Main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). Howeve
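Under Hardy-Weinberg assumptions, the Paetkau et al. likelihood assignment described above reduces to scoring a multilocus genotype against each candidate population's allele frequencies and assigning the individual to the maximum. A minimal illustrative sketch (population names, loci, and frequencies are all made up):

```python
import numpy as np

def genotype_log_likelihood(genotype, freqs):
    """Log-likelihood of a multilocus genotype given one population's
    per-locus allele frequencies, assuming Hardy-Weinberg equilibrium."""
    ll = 0.0
    for (a1, a2), f in zip(genotype, freqs):
        p, q = f.get(a1, 1e-6), f.get(a2, 1e-6)  # tiny floor avoids log(0)
        ll += np.log(p * p) if a1 == a2 else np.log(2 * p * q)
    return ll

def assign(genotype, pop_freqs):
    """Assign the individual to the population maximizing the likelihood."""
    scores = {pop: genotype_log_likelihood(genotype, freqs)
              for pop, freqs in pop_freqs.items()}
    return max(scores, key=scores.get), scores

# Hypothetical two-locus example with alleles 'A'/'a' and 'B'/'b'.
pop_freqs = {
    "popX": [{"A": 0.9, "a": 0.1}, {"B": 0.8, "b": 0.2}],
    "popY": [{"A": 0.2, "a": 0.8}, {"B": 0.3, "b": 0.7}],
}
best, scores = assign([("A", "A"), ("B", "b")], pop_freqs)
print(best)  # the genotype is far more probable under popX's frequencies
```

The Bayesian variants cited in the record replace these point likelihoods with posterior probabilities over genealogical hypotheses, but the scoring skeleton is the same.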
Evaluation of activity images in dynamics speckle in search of objective estimators
NASA Astrophysics Data System (ADS)
Avendaño Montecinos, Marcos; Mora Canales, Victor; Cap, Nelly; Grumel, Eduardo; Rabal, Hector; Trivi, Marcelo; Baradit, Erik
2015-08-01
We explore the performance of two algorithms to screen loci of equal activity in dynamic speckle images. Dynamic speckle images are currently applied in medicine, biology, agriculture and other disciplines. Nevertheless, no objective standard has been proposed so far to evaluate the performance of the algorithms, which must therefore be assessed by subjective appreciation. We use two case studies of activity that do not carry the inherent biological variability to test the methods, "Generalized Differences" and "Fujii", looking for a standard to evaluate their performance in an objective way. As study cases, we use the drying of paint on an (assumed) unknown topography, namely the surface of a coin, and the activity due to preheating a piece of paper that hides writing on the surface under the paper. A known object of simple topography is included in the image besides the painted coin, consisting of a paint pool whose depth is a linear function of position. Both algorithms are applied to the images, and the intensity profile of the results along the linear region of the pool activity is used to estimate the depth of the geometric topography hidden under the paint on the coin. The accuracy of the result is used as a figure of merit for the corresponding algorithm. In the other experiment, a hidden dark bar printed on paper is covered with one or two paper leaves, slightly preheated with a lamp, and activity images are registered and processed with both algorithms. The intensity profile of the activity images is used to estimate which method gives a better description of the bar edges and their deterioration. Experimental results are shown.
Gaye-Siessegger, Julia; Mamun, Shamsuddin M; Brinker, Alexander; Focken, Ulfert
2013-04-01
For diet reconstruction studies using stable isotopes, accurate estimates of trophic shift (Δδ_trophic) are necessary to obtain reliable results. Several factors have been identified which affect the trophic shift. The goal of the present experiment was to test whether measurements of enzyme activities could improve the accuracy of trophic shift estimation in fish. Forty-eight Nile tilapia (Oreochromis niloticus) were fed under controlled conditions with two diets differing in protein content (21 and 41%), each at four different levels (4, 8, 12 and 16 g kg⁻⁰·⁸ d⁻¹). At the end of the feeding experiment, proximate composition, whole-body δ¹³C and δ¹⁵N, as well as the activities of enzymes involved in anabolism and catabolism, were measured. Step-wise regression specified contributing variables for Δδ¹⁵N (malic enzyme, aspartate aminotransferase and protein content) and Δδ¹³C of lipid-free material (aspartate aminotransferase and protein content). The variation explained by the significant main effects was about 70% for Δδ¹⁵N and Δδ¹³C of lipid-free material, respectively. The results of the present study indicate that enzyme activities are suitable indicators to improve estimates of trophic shift.
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Data supporting the spectrophotometric method for the estimation of catalase activity
Hadwan, Mahmoud Hussein; Abed, Hussein Najm
2015-01-01
Here we provide raw and processed data and methods for the estimation of catalase activities. A simple and accurate colorimetric assay for catalase activity is described. The method is based on the reaction of undecomposed hydrogen peroxide with ammonium molybdate to produce a yellowish color with maximum absorbance at 374 nm. The method includes a correction factor to exclude interference arising from the presence of amino acids and proteins in serum, and it avoids the interference that arises from measuring absorbance at unsuitable wavelengths. PMID:26862558
Estimating Active Transportation Behaviors to Support Health Impact Assessment in the United States
Mansfield, Theodore J.; Gibson, Jacqueline MacDonald
2016-01-01
Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as “active transportation”), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh–Durham–Chapel Hill, NC, USA, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9–23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5–6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5–38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh–Durham–Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 min of daily walking time) for 83% of observations. Across the Raleigh–Durham–Chapel Hill region, an estimated 38 (95% CI 15–59) premature deaths potentially could
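The stated equivalence of 0.5 MET-hours per day to about 9 minutes of walking implies a marginal intensity of roughly 3.3 METs for walking, in line with common compendium values; that figure is inferred here, not given in the source. A one-line check:

```python
# MET-hours/day divided by hours/day of walking gives the implied intensity.
walking_mets = 0.5 / (9 / 60)
print(round(walking_mets, 2))  # 3.33 METs, close to standard walking values
```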
Singh, D R; Singh, Shrawan; Salim, K M; Srivastava, R C
2012-06-01
The present study aimed to determine the antioxidant activity and phytochemical contents in 10 underutilized fruits of Andaman Islands (India) namely Malpighia glabra L., Mangifera andamanica L., Morinda citrifolia L., Syzygium aqueum (Burm.f) Alst., Annona squamosa L., Averrhoa carambola L., Averrhoa bilimbi L., Dillenia indica L., Annona muricata L. and Ficus racemosa L. The antioxidant activity varied from 74.27% to 98.77%, and the methanol extract of M. glabra showed the highest antioxidant activity (98.77%; inhibitory concentration, IC(50) = 262.46 μg/ml). Methanol was found to be a better extraction solvent than acetone or water for estimating the antioxidant activity. M. glabra was found to be rich in phytochemicals viz. polyphenol (355.74 mg/100 g), anthocyanin (91.31 mg/100 g), carotenoids (109.16 mg/100 g), tannin (24.39 mg/100 g) and ascorbic acid (394.23 mg/100 g). Carbohydrate content was estimated to be highest in M. glabra (548 mg/100 g). Phenols, tannins, anthocyanins and carotenoids contents showed positive correlation (r² = 0.846, r² = 0.864, r² = 0.915 and r² = 0.806, respectively) with antioxidant activity. The information generated in the present study will be useful for bioprospecting of underutilized fruits of Andaman Islands.
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.; Maranas, Costas D.
2014-10-16
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to
Cosmological Parameters from CMB Maps without Likelihood Approximation
NASA Astrophysics Data System (ADS)
Racine, B.; Jewell, J. B.; Eriksen, H. K.; Wehus, I. K.
2016-03-01
We propose an efficient Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating cosmological parameters from cosmic microwave background (CMB) data without the use of likelihood approximations. It builds on a previously developed Gibbs sampling framework that allows for exploration of the joint CMB sky signal and power spectrum posterior, P(s, C_ℓ | d), and addresses a long-standing problem of efficient parameter estimation simultaneously in regimes of high and low signal-to-noise ratio. To achieve this, our new algorithm introduces a joint Markov chain move in which both the signal map and power spectrum are synchronously modified, by rescaling the map according to the proposed power spectrum before evaluating the Metropolis-Hastings acceptance probability. Such a move was already introduced by Jewell et al., who used it to explore low signal-to-noise posteriors. However, they also found that the same algorithm is inefficient in the high signal-to-noise regime, since a brute-force rescaling operation does not account for phase information. This problem is mitigated in the new algorithm by subtracting the Wiener filter mean field from the proposed map prior to rescaling, leaving high signal-to-noise information invariant in the joint step, and effectively only rescaling the low signal-to-noise component. To explore the full posterior, the new joint move is then interleaved with a standard conditional Gibbs move for the sky map. We apply our new algorithm to simplified simulations for which we can evaluate the exact posterior to study both its accuracy and its performance, and find good agreement with the exact posterior; marginal means agree to ≲0.006σ and standard deviations to better than ~3%. The Markov chain correlation length is of the same order of magnitude as those obtained by other standard samplers in the field.
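The joint rescaling move described above can be sketched on a one-pixel toy model. Everything here is an illustrative assumption (the exponential prior on C, the proposal scale, the data value), not the paper's actual implementation; the point is only the mechanics of subtracting the Wiener mean, rescaling the fluctuation by sqrt(C'/C), and including the Jacobian of that deterministic map in the acceptance ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-pixel toy posterior P(s, C | d): data d = s + noise(N_var),
# signal prior s ~ N(0, C), and an (assumed) exponential prior on C.
N_var, d = 1.0, 3.0

def wiener_mean(C):
    # Wiener-filter mean of s given C and d
    return (C / (C + N_var)) * d

def log_post(s, C):
    return (-0.5 * (d - s) ** 2 / N_var
            - 0.5 * s ** 2 / C - 0.5 * np.log(C)
            - C / 10.0)

s, C = d, 1.0
accepted, C_samples = 0, []
for _ in range(5000):
    # Conditional Gibbs move for the sky signal: s | C, d is Gaussian
    var = 1.0 / (1.0 / C + 1.0 / N_var)
    s = wiener_mean(C) + np.sqrt(var) * rng.normal()
    # Joint move: propose C', subtract the Wiener mean, rescale only the
    # fluctuation by sqrt(C'/C), then add the new Wiener mean back
    C_prop = C + 0.3 * rng.normal()
    if C_prop > 0:
        s_prop = wiener_mean(C_prop) + np.sqrt(C_prop / C) * (s - wiener_mean(C))
        # Metropolis-Hastings ratio; the deterministic rescaling contributes
        # a Jacobian factor |ds'/ds| = sqrt(C'/C)
        log_r = (log_post(s_prop, C_prop) - log_post(s, C)
                 + 0.5 * np.log(C_prop / C))
        if np.log(rng.random()) < log_r:
            s, C = s_prop, C_prop
            accepted += 1
    C_samples.append(C)
```

Because the Wiener mean is carried over rather than rescaled, only the fluctuation (the low signal-to-noise component in the full-sky problem) is modified, which is what keeps the move efficient at high signal-to-noise.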
Ku, L.; Kolibal, J.G.
1982-06-01
The neutron-induced material activation dose rate data are summarized for TFTR operation. This report marks the completion of the second phase of the systematic study of the activation problem on the TFTR. Estimates of the neutron-induced activation dose rates were made for spherical and slab objects, based on a point kernel method, for a wide range of materials. The dose rates as a function of cooling time for standard samples are presented for a number of typical neutron spectra expected during TFTR DD and DT operations. The factors which account for the variations of the pulsing history, the characteristic size of the object, and the distance of observation relative to the standard samples are also presented.
Bornstein, Daniel B; Beets, Michael W; Byun, Wonwoo; Welk, Greg; Bottai, Matteo; Dowda, Marsha; Pate, Russell
2011-09-01
No universally accepted ActiGraph accelerometer cutpoints for quantifying moderate-to-vigorous physical activity (MVPA) exist. Estimates of MVPA from one set of cutpoints cannot be directly compared to MVPA estimates using different cutpoints, even when the same outcome units are reported (MVPA min day⁻¹). The purpose of this study was to illustrate the utility of an equating system that translates reported MVPA estimates from one set of cutpoints into another, to better inform public health policy. Secondary data analysis. ActiGraph data from a large preschool project (N=419, 3-6-yr-olds, CHAMPS) were used to conduct the analyses. Conversions were made among five different published MVPA cutpoints for children: Pate (PT), Sirard (SR), Puyau (PY), Van Cauwenberghe (VC), and Freedson Equation (FR). A 10-fold cross-validation procedure was used to develop prediction equations using MVPA estimated from each of the five sets of cutpoints as the dependent variable, with estimated MVPA from one of the other four sets of cutpoints (e.g., PT MVPA predicted from FR MVPA). The mean levels of MVPA for the total sample ranged from 22.5 (PY) to 269.0 (FR) min day⁻¹. Across the prediction models (5 total), the median proportion of variance explained (R²) was 0.76 (range 0.48-0.97). The median absolute percent error was 17.2% (range 6.3-38.4%). The prediction equations developed here allow for direct comparisons between studies employing different ActiGraph cutpoints in preschool-age children. These prediction equations give public health researchers and policy makers a more concise picture of physical activity levels of preschool-aged children. PMID:21524938
Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim
2012-06-01
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were missing either completely at random (MCAR) or at random (MAR). An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
A New Black Hole Mass Estimate for Obscured Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Minezaki, Takeo; Matsushita, Kyoko
2015-04-01
We propose a new method for estimating the mass of a supermassive black hole, applicable to obscured active galactic nuclei (AGNs). This method estimates the black hole mass using the width of the narrow core of the neutral FeKα emission line in X-rays and the distance of its emitting region from the black hole based on the isotropic luminosity indicator via the luminosity scaling relation. Assuming the virial relation between the locations and the velocity widths of the neutral FeKα line core and the broad Hβ emission line, the luminosity scaling relation of the neutral FeKα line core emitting region is estimated. We find that the velocity width of the neutral FeKα line core falls between that of the broad Balmer emission lines and the corresponding value at the dust reverberation radius for most of the target AGNs. The black hole mass M_BH,FeKα estimated with this method is then compared with other black hole mass estimates, such as the broad emission-line reverberation mass M_BH,rev for type 1 AGNs, the mass M_BH,H2O based on the H2O maser, and the single-epoch mass estimate M_BH,pol based on the polarized broad Balmer lines for type 2 AGNs. We find that M_BH,FeKα is consistent with M_BH,rev and M_BH,pol, and find that M_BH,FeKα correlates well with M_BH,H2O. These results suggest that M_BH,FeKα is a potential indicator of the black hole mass for obscured AGNs. In contrast, M_BH,FeKα is systematically larger than M_BH,H2O by about a factor of 5, and the possible origins are discussed.
Critical analysis of the Hanford spent nuclear fuel project activity based cost estimate
Warren, R.N.
1998-09-29
In 1997, the SNFP developed a baseline change request (BCR) and submitted it to DOE-RL for approval. The schedule was formally evaluated to have a 19% probability of success [Williams, 1998]. In December 1997, DOE-RL Manager John Wagoner approved the BCR contingent upon a subsequent independent review of the new baseline. The SNFP took several actions during the first quarter of 1998 to prepare for the independent review. The project developed the Estimating Requirements and Implementation Guide [DESH, 1998] and trained cost account managers (CAMs) and other personnel involved in the estimating process in activity-based cost (ABC) estimating techniques. The SNFP then applied ABC estimating techniques to develop the basis for the December Baseline (DB) and documented that basis in Basis of Estimate (BOE) books. These BOEs were provided to DOE in April 1998. DOE commissioned Professional Analysis, Inc. (PAI) to perform a critical analysis (CA) of the DB. PAI's review formally began on April 13. PAI performed the CA, provided three sets of findings to the SNFP contractor, and initiated reconciliation meetings. During the course of PAI's review, DOE directed the SNFP to develop a new baseline with a higher probability of success. The contractor transmitted the new baseline, which is referred to as the High Probability Baseline (HPB), to DOE on April 15, 1998 [Williams, 1998]. The HPB was estimated to approach a 90% confidence level on the start of fuel movement [Williams, 1998]. This high probability resulted in an increased cost and a schedule extension. To implement the new baseline, the contractor initiated 26 BCRs with supporting BOEs. PAI's scope was revised on April 28 to add reviewing the HPB and the associated BCRs and BOEs.
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieża Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly alters the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures. PMID:27220203
NASA Astrophysics Data System (ADS)
Nicora, M. G.; Bürgesser, R. E.; Quel, E. J.; Avila, E.
2013-05-01
Information about lightning activity is fundamental to atmospheric surveillance due to its relevant applications in areas such as security, defense, early-warning systems, and the generation of statistical data for planning infrastructure projects. Few countries in the world have their own lightning detection networks, which allow monitoring of the lightning activity within their national borders. The development of the World Wide Lightning Location Network (WWLLN) provides reliable global lightning data at low cost, which has already been used to characterize the lightning activity in several regions of the world. The aim of the present work is to use the lightning data obtained by the WWLLN to analyze the lightning activity over Argentina between 2005 and 2012. To achieve this objective, isoceraunic maps of Argentina were made. These maps will be incorporated into the standard IRAM 2184-11 "lightning protection". Furthermore, by using data on flashes per km² per year, we provide a model for estimating deaths from lightning. The model is based on parameterizations of the flash density, population density and urbanization of a given region. The model was adapted for Argentina and Brazil in order to obtain an estimation of deaths in different regions. The results obtained make it possible to promote protective behaviors in the population.
NASA Astrophysics Data System (ADS)
Tan, Puay Yok; Ismail, Mirza Rifqi Bin
2016-02-01
Photosynthetically active radiation (PAR) is an important input variable for urban climate, crop modelling and ecosystem services studies. Despite its importance, only a few empirical studies have been conducted on PAR, its relationship to global solar radiation and sky conditions, and its estimation in the tropics. We report in this study the characterisation of PAR in Singapore through direct measurements and the development of models for its estimation using input variables of global solar radiation (H), photometric radiation (L), clearness index (k_t) and sky view factor (SVF). Daily PAR showed a good correlation with daily H and had a comparatively small seasonal variation due to Singapore's equatorial position. The ratio of PAR to H (PAR/H) showed a slight depression in midyear from May to August, which correlated well with seasonal patterns in rainfall over the study period. Hourly PAR/H increased throughout the day. Three empirical models developed in this study were able to predict daily PAR satisfactorily, with the most accurate model being one which included both H and k_t as independent variables. A regression model for estimation of PAR under shaded conditions using SVF produced satisfactory estimation of daily PAR but was prone to high mean percentage error at low PAR levels.
Estimating forest and woodland aboveground biomass using active and passive remote sensing
Wu, Zhuoting; Dye, Dennis G.; Vogel, John M.; Middleton, Barry R.
2016-01-01
Aboveground biomass was estimated from active and passive remote sensing sources, including airborne lidar and Landsat-8 satellites, in an eastern Arizona (USA) study area comprised of forest and woodland ecosystems. Compared to field measurements, airborne lidar enabled direct estimation of individual tree height with a slope of 0.98 (R² = 0.98). At the plot level, lidar-derived height and intensity metrics provided the most robust estimate for aboveground biomass, producing dominant-species-based aboveground biomass models with errors ranging from 4 to 14 Mg ha⁻¹ across all woodland and forest species. Landsat-8 imagery produced dominant-species-based aboveground biomass models with errors ranging from 10 to 28 Mg ha⁻¹. Thus, airborne lidar allowed for fine-scale aboveground biomass mapping with low uncertainty, while Landsat-8 seems best suited for broader spatial scale products such as a national biomass essential climate variable (ECV) based on land cover types for the United States.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
Estimating Youth Locomotion Ground Reaction Forces Using an Accelerometer-Based Activity Monitor
Neugebauer, Jennifer M.; Hawkins, David A.; Beckett, Laurel
2012-01-01
To address a variety of questions pertaining to the interactions between physical activity, musculoskeletal loading and musculoskeletal health/injury/adaptation, simple methods are needed to quantify, outside a laboratory setting, the forces acting on the human body during daily activities. The purpose of this study was to develop a statistically based model to estimate peak vertical ground reaction force (pVGRF) during youth gait. 20 girls (10.9±0.9 years) and 15 boys (12.5±0.6 years) wore a Biotrainer AM over their right hip. Six walking and six running trials were completed after a standard warm-up. Average AM intensity (g) and pVGRF (N) during stance were determined. Repeated-measures mixed-effects regression models were developed to estimate pVGRF from Biotrainer activity monitor acceleration in youth (girls 10-12, boys 12-14 years) while walking and running. Log-transformed pVGRF had a statistically significant relationship with activity monitor acceleration, centered mass, sex (girl), type of locomotion (run), and the locomotion type-acceleration interaction, controlling for subject as a random effect. A generalized regression model without subject-specific random effects was also developed. The average absolute differences between the actual and predicted pVGRF were 5.2% (1.6% standard deviation) and 9% (4.2% standard deviation) using the mixed and generalized models, respectively. The results of this study support estimating pVGRF from hip acceleration using a mixed-model regression equation. PMID:23133564
Trunk Muscle Activation and Estimating Spinal Compressive Force in Rope and Harness Vertical Dance.
Wilson, Margaret; Dai, Boyi; Zhu, Qin; Humphrey, Neil
2015-12-01
Rope and harness vertical dance takes place off the floor with the dancer suspended from his or her center of mass in a harness attached to a rope from a point overhead. Vertical dance represents a novel environment for training and performing in which expected stresses on the dancer's body are different from those that take place during dance on the floor. Two male and eleven female dancers with training in vertical dance performed six typical vertical dance movements with electromyography (EMG) electrodes placed bilaterally on rectus abdominus, external oblique, erector spinae, and latissimus dorsi. EMG data were expressed as a percentage of maximum voluntary isometric contraction (MVIC). A simplified musculoskeletal model based on muscle activation for these four muscle groups was used to estimate the compressive force on the spine. The greatest muscle activation for erector spinae and latissimus dorsi and the greatest trunk compressive forces were seen in vertical axis positions where the dancer was moving the trunk into a hyper-extended position. The greatest muscle activation for rectus abdominus and external oblique and the second highest compressive force were seen in a supine position with the arms and legs extended away from the center of mass (COM). The least muscle activation occurred in positions where the limbs were hanging below the torso. These movements also showed relatively low muscle activation and compressive forces. Post-test survey results revealed that dancers felt comfortable in these positions; however, observation of some positions indicated insufficient muscular control. Computing the relative contribution of muscles, expressed as muscle activation and estimated spinal compression, provided a measure of how much the muscle groups were working to support the spine and the rest of the dancer's body in the different movements tested. Additionally, identifying typical muscle recruitment patterns in each movement will help identify key exercises
A new estimator for activity on dynamic speckles based on contrast of successive correlations
NASA Astrophysics Data System (ADS)
da Silva, Emerson Rodrigo; Rabal, Hector Jorge; Júnior, Mauro Favoretto; Muramatsu, Mikiya
2008-04-01
In this work, we propose the contrast of successive correlations as a valid estimator for activity on dynamic speckles. We call "successive correlations" the correlation coefficients between two successive instants recorded in a Time History Speckle Pattern (THSP). Then, by dividing the standard deviation of these coefficients by their mean value, we obtain the corresponding contrast of successive correlations. A large number of THSPs was simulated, using a numerical model previously developed by one of the authors, and the new index was compared with the corresponding inertia moments of co-occurrence matrices. A comparison with real THSPs, obtained during a paint-drying process, was also performed. We found a strong correlation between the contrast of successive correlations and the inertia moment, suggesting the validity of this new estimator.
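The index above reduces to a few lines of array code. A minimal numerical sketch (the AR(1) surrogate data and all function names are illustrative assumptions; a real THSP would come from recorded speckle images, with rows as successive time samples of one image line):

```python
import numpy as np

rng = np.random.default_rng(1)

def contrast_of_successive_correlations(thsp):
    """Activity index: correlate each THSP row (time sample) with the next,
    then divide the standard deviation of those correlation coefficients
    by their mean value."""
    coeffs = np.array([np.corrcoef(thsp[t], thsp[t + 1])[0, 1]
                       for t in range(len(thsp) - 1)])
    return coeffs.std() / coeffs.mean()

def ar1_thsp(rho, n_t=300, n_px=128):
    # Surrogate THSP: each row decorrelates from the previous one at rate rho
    x = rng.normal(size=n_px)
    rows = []
    for _ in range(n_t):
        x = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n_px)
        rows.append(x.copy())
    return np.array(rows)

fast = ar1_thsp(0.5)    # high activity: successive rows weakly correlated
slow = ar1_thsp(0.98)   # low activity: successive rows nearly identical
```

Higher activity should yield a larger contrast, since the successive correlation coefficients both drop in mean and scatter more.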
Synergism of active and passive microwave data for estimating bare surface soil moisture
NASA Technical Reports Server (NTRS)
Saatchi, Sasan S.; Njoku, Eni G.; Wegmueller, Urs
1993-01-01
Active and passive microwave sensors were applied effectively to the problem of estimating the surface soil moisture in a variety of environmental conditions. Research to date has shown that both types of sensors are also sensitive to the surface roughness and the vegetation cover. In estimating the soil moisture, the effects of the vegetation and roughness are often corrected either by acquiring multi-configuration (frequency and polarization) data or by adjusting the surface parameters in order to match the model predictions to the measured data. Due to the limitations on multi-configuration spaceborne data and the lack of a priori knowledge of the surface characteristics for parameter adjustments, it was suggested that the synergistic use of the sensors may improve the estimation of the soil moisture over the extreme range of naturally occurring soil and vegetation conditions. To investigate this problem, the backscattering and emission from a bare soil surface were modeled using the classical rough surface scattering theory. The model combines the small perturbation and the Kirchhoff approximations in conjunction with the Peake formulation to cover a wide range of surface roughness parameters with respect to frequency for both active and passive measurements. In this approach, the same analytical method was used to calculate the backscattering and emissivity. Therefore, the active and passive simulations can be combined at various polarizations and frequencies in order to estimate the soil moisture more accurately. As a result, it is shown that (1) the emissivity is less dependent on the surface correlation length, (2) the ratio of the backscattering coefficient (HH) over the surface reflectivity (H) is almost independent of the soil moisture for a wide range of surface roughness, and (3) this ratio can be approximated as a linear function of the surface rms height. The results were compared with the data obtained by a multi-frequency radiometer
Heinze, Georg; Ploner, Meinhard; Beyea, Jan
2013-12-20
In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CIs), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CI may not be reliable. Thus, RR alternatives have been considered, for example, Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first order biases and guarantee convergence of parameter estimation. To fill the gap, we consider the combination of penalized likelihood profiles (CLIP) by expressing them as posterior cumulative distribution functions (CDFs) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. CDFs from multiple imputations can then easily be averaged into a combined CDF c , allowing confidence limits for a parameter β at level 1 - α to be identified as those β* and β** that satisfy CDF c (β*) = α ∕ 2 and CDF c (β**) = 1 - α ∕ 2. We demonstrate that the CLIP method outperforms RR in analyzing both simulated data and data from our motivating example. CLIP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLIP to Bayesian sampling methods using Markov chain Monte Carlo. CLIP is available in the R package logistf. PMID:23873477
Weibull distribution based on maximum likelihood with interval inspection data
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.
1985-01-01
The two Weibull parameters based upon the method of maximum likelihood are determined. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
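A sketch of Weibull maximum likelihood with interval inspection data follows. The inspection schedule, the simulated failures, and the grid-search optimiser are all illustrative assumptions, not the report's actual data or code; the key idea is that a failure known only to lie in an inspection interval (a, b] contributes F(b) − F(a) to the likelihood, with F(t) = 1 − exp(−(t/η)^β):

```python
import math
import random

random.seed(42)

true_beta, true_eta = 2.0, 100.0
inspections = [25.0, 50.0, 75.0, 100.0, 150.0, 200.0, 300.0]

def weibull_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def bracket(t):
    # Return the inspection interval (a, b] containing failure time t
    prev = 0.0
    for x in inspections:
        if t <= x:
            return prev, x
        prev = x
    return prev, float("inf")  # failed after the last inspection

# Simulate failure times, then keep only the interval each one fell into
intervals = []
for _ in range(200):
    t = true_eta * (-math.log(1.0 - random.random())) ** (1.0 / true_beta)
    intervals.append(bracket(t))

def loglik(beta, eta):
    ll = 0.0
    for a, b in intervals:
        p = weibull_cdf(b, beta, eta) - weibull_cdf(a, beta, eta)
        ll += math.log(max(p, 1e-300))  # guard against log(0)
    return ll

# Coarse grid search standing in for a proper numerical optimiser
_, beta_hat, eta_hat = max(
    (loglik(b, e), b, e)
    for b in (0.5 + 0.1 * i for i in range(35))
    for e in (50.0 + 5.0 * j for j in range(31)))
```

With more data or a finer optimiser, (beta_hat, eta_hat) converges toward the generating parameters even though no exact failure time is ever observed.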
Low-complexity approximations to maximum likelihood MPSK modulation classification
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2004-01-01
We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
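The exact coherent ML classifier averages the Gaussian likelihood over the constellation points of each candidate modulation and picks the larger value; the paper's contribution is low-complexity approximations to this rule, which are not reproduced here. A sketch of the exact coherent classifier, assuming a unit-energy constellation and known SNR:

```python
import numpy as np

def psk_loglik(r, M, snr):
    """Coherent ML log-likelihood of received samples r under M-PSK on AWGN,
    averaging the Gaussian likelihood over the M equiprobable symbols."""
    sigma2 = 1.0 / snr                                # unit-energy constellation
    s = np.exp(2j * np.pi * np.arange(M) / M)         # M-PSK symbols
    a = -np.abs(r[:, None] - s[None, :]) ** 2 / sigma2
    amax = a.max(axis=1)                              # log-sum-exp stabilization
    return np.sum(amax + np.log(np.mean(np.exp(a - amax[:, None]), axis=1)))

def classify(r, snr, M1=2, M2=4):
    """Pick the modulation order with the larger likelihood."""
    return M1 if psk_loglik(r, M1, snr) > psk_loglik(r, M2, snr) else M2

rng = np.random.default_rng(0)
snr = 10.0
symbols = np.where(rng.integers(0, 2, 500) == 0, 1.0, -1.0)   # BPSK transmission
noise = (rng.normal(size=500) + 1j * rng.normal(size=500)) * np.sqrt(1 / (2 * snr))
r = symbols + noise
```

The complexity driver is the exponential/log-sum-exp over all M points per sample, which is what the approximations in the paper aim to avoid.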
A notion of graph likelihood and an infinite monkey theorem
NASA Astrophysics Data System (ADS)
Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone
2014-01-01
We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.
New approaches to estimation of peat deposits for production of biologically active compounds
NASA Astrophysics Data System (ADS)
Stepchenko, L. M.; Yurchenko, V. I.; Krasnik, V. G.; Syedykh, N. J.
2009-04-01
It is known that biologically active preparations from peat increase animal productivity and resistance to stress factors, and have adaptogenic, antioxidant, and immunomodulatory properties. The optimal choice of peat deposits for the production of biologically active preparations requires a detailed comparative analysis of peat properties from different deposits. To this end, a cadastre of the peat of Ukraine has been developed in the humic substances laboratory named after Prof. L. A. Khristeva (Dnipropetrovsk Agrarian University, Ukraine). It is based on research into the physical and chemical properties, toxicity, and biological activity of peat, and is called the Biocadastre. The Biocadastre is built on a set of parameters, including descriptions of physical and chemical properties (active acidity, degree of decomposition, botanical composition, etc.), toxicity estimation (by parabiotic, infusorial, inhibitor, and other tests), and indexes of biological activity (growth-promoting, antioxidative, adaptogenic, immunomodulatory, antistress, and other actions). The blocks of Biocadastre indexes are differentiated according to their intended use in creating preparations for plants, animals, and microorganisms. The Biocadastre will make it possible to choose the peat deposits most suitable for the production of different biologically active preparations, of both broad and narrow spectra of action, depending on the field of application (medicine, agriculture, veterinary medicine, the microbiological industry, balneology, cosmetology).
Optimal likelihood-based matching of volcanic sources and deposits in the Auckland Volcanic Field
NASA Astrophysics Data System (ADS)
Kawabata, Emily; Bebbington, Mark S.; Cronin, Shane J.; Wang, Ting
2016-09-01
In monogenetic volcanic fields, where each eruption forms a new volcano, focusing and migration of activity over time is a very real possibility. In order for hazard estimates to reflect future, rather than past, behavior, it is vital to assemble as much reliable age data as possible on past eruptions. Multiple swamp/lake records have been extracted from the Auckland Volcanic Field, underlying the 1.4 million-population city of Auckland. We examine here the problem of matching these dated deposits to the volcanoes that produced them. The simplest issue is separation in time, which is handled by simulating prior volcano age sequences from direct dates where known, thinned via ordering constraints between the volcanoes. The subproblem of varying deposition thicknesses (which may be zero) at five locations of known distance and azimuth is quantified using a statistical attenuation model for the volcanic ash thickness. These elements are combined with other constraints, from widespread fingerprinted ash layers that separate eruptions and time-censoring of the records, into a likelihood that was optimized via linear programming. A second linear program was used to optimize over the Monte-Carlo simulated set of prior age profiles to determine the best overall match and consequent volcano age assignments. Considering all 20 matches, and the multiple factors of age, direction, and size/distance simultaneously, results in some non-intuitive assignments which would not be produced by single factor analyses. Compared with earlier work, the results provide better age control on a number of smaller centers such as Little Rangitoto, Otuataua, Taylors Hill, Wiri Mountain, Green Hill, Otara Hill, Hampton Park and Mt Cambria. Spatio-temporal hazard estimates are updated on the basis of the new ordering, which suggest that the scale of the 'flare-up' around 30 ka, while still highly significant, was less than previously thought.
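Once a per-pair log-likelihood combining the age, azimuth, and thickness terms is in hand, the deposit-to-volcano matching itself can be posed as an assignment problem. A toy sketch via `scipy.optimize.linear_sum_assignment` (the likelihood matrix here is random, standing in for the paper's combined terms, and the additional ordering and censoring constraints of the full linear program are omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 6                                   # deposits and candidate volcanoes
# hypothetical matrix: entry [i, j] is the log-likelihood of deposit i
# having been produced by volcano j (age / azimuth / thickness combined)
loglik = rng.normal(size=(n, n))
rows, cols = linear_sum_assignment(-loglik)   # maximize total log-likelihood
best_total = loglik[rows, cols].sum()         # score of the optimal matching
```

The optimal matching necessarily scores at least as high as any single-factor or identity assignment, which is why joint optimization can produce the non-intuitive assignments noted above.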
Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y
2012-06-01
The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions involving immunization providers. The aim of this study was to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. A micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) has a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physician total cost was higher than the registrar cost and the nurse cost. The total immunization cost will increase by about 13.3% owing to dose errors.
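The micro-costing step is a per-activity multiplication summed over staff roles. A toy sketch; the activity times loosely follow the averages reported above, while the US$/min salary rates (and hence the resulting total) are invented for illustration:

```python
# micro-costing sketch: per-dose cost of each staff activity is
# (average salary per minute) x (observed activity time in minutes)
activities = {
    "registration": (0.05, 6.7),   # (US$/min, min per dose) -- rates hypothetical
    "physician":    (0.12, 10.2),
    "nurse":        (0.07, 5.1),
}
per_dose_cost = sum(rate * minutes for rate, minutes in activities.values())
```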
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2003-10-29
Quantitative analysis of uptake and washout of cardiac single photon emission computed tomography (SPECT) radiopharmaceuticals has the potential to provide better contrast between healthy and diseased tissue, compared to conventional reconstruction of static images. Previously, we used B-splines to model time-activity curves (TACs) for segmented volumes of interest and developed fast least-squares algorithms to estimate spline TAC coefficients and their statistical uncertainties directly from dynamic SPECT projection data. This previous work incorporated physical effects of attenuation and depth-dependent collimator response. In the present work, we incorporate scatter and use a computer simulation to study how scatter modeling affects directly estimated TACs and subsequent estimates of compartmental model parameters. An idealized single-slice emission phantom was used to simulate a 15 min dynamic {sup 99m}Tc-teboroxime cardiac patient study in which 500,000 events containing scatter were detected from the slice. When scatter was modeled, unweighted least-squares estimates of TACs had root mean square (RMS) error that was less than 0.6% for normal left ventricular myocardium, blood pool, liver, and background tissue volumes and averaged 3% for two small myocardial defects. When scatter was not modeled, RMS error increased to average values of 16% for the four larger volumes and 35% for the small defects. Noise-to-signal ratios (NSRs) for TACs ranged between 1-18% for the larger volumes and averaged 110% for the small defects when scatter was modeled. When scatter was not modeled, NSR improved by average factors of 1.04 for the larger volumes and 1.25 for the small defects, as a result of the better-posed (though more biased) inverse problem. Weighted least-squares estimates of TACs had slightly better NSR and worse RMS error, compared to unweighted least-squares estimates. Compartmental model uptake and washout parameter estimates obtained from the TACs were less
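Fitting B-spline TAC coefficients by linear least squares can be illustrated with a direct fit to a noisy curve. This is only a sketch of the spline-fitting idea: the actual method estimates the coefficients from dynamic projection data with attenuation, collimator-response, and scatter models, which is not reproduced here, and the curve shape, knots, and noise level are assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(2)
t = np.linspace(0.01, 15.0, 90)              # sample times over a 15 min study
tac_true = 5 * t * np.exp(-0.4 * t)          # uptake-then-washout shape
y = tac_true + rng.normal(0, 0.1, t.size)    # noisy activity samples

k = 3                                        # cubic B-splines
# boundary knots repeated k+1 times, plus a handful of interior knots
knots = np.r_[(t[0],) * (k + 1), np.linspace(2, 13, 5), (t[-1],) * (k + 1)]
spline = make_lsq_spline(t, y, knots, k)     # linear least-squares coefficients
rmse = np.sqrt(np.mean((spline(t) - tac_true) ** 2))
```

Because the model is linear in the coefficients, the same least-squares machinery (and its covariance) carries over when the design matrix maps coefficients to projections instead of to time samples.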
Assessing the Likelihood of Rare Medical Events in Astronauts
NASA Technical Reports Server (NTRS)
Myers, Jerry G., Jr.; Leandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.
2011-01-01
Despite over half a century of manned space flight, the space flight community is only now coming to fully assess the short and long term medical dangers of exposure to reduced gravity environments. Further, as new manned spacecraft are designed and with the advent of commercial flight capabilities to the general public, a full understanding of medical risk becomes even more critical for maintaining and understanding mission safety and crew health. To address these critical issues, the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) has begun to address the medical hazards with a formalized risk management approach by effectively identifying and attempting to mitigate acute and chronic medical risks to manned space flight. This paper describes NASA Glenn Research Center's (GRC) efforts to develop a systematic methodology to assess the likelihood of in-flight medical conditions. Using a probabilistic approach, medical risks are assessed using well established and accepted biomedical and human performance models in combination with fundamentally observed data that defines the astronauts' physical conditions, environment and activity levels. Two different examples of space flight risk are used to show the versatility of our approach and how it successfully integrates disparate information to provide HRP decision makers with a valuable source of information which is otherwise lacking.
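One elementary ingredient of such a probabilistic assessment is converting an incidence rate into a per-mission event probability under a Poisson assumption. This is a generic sketch, not the GRC methodology; the rate, crew size, and mission duration below are hypothetical:

```python
import math

def prob_at_least_one(rate_per_person_year, crew_size, mission_years):
    """Poisson sketch: probability of at least one occurrence of a rare
    medical condition during a mission, given an assumed incidence rate."""
    expected_events = rate_per_person_year * crew_size * mission_years
    return 1.0 - math.exp(-expected_events)

# hypothetical: incidence 0.01 per person-year, 6 crew, 6-month mission
p = prob_at_least_one(0.01, 6, 0.5)
```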
Synergistic use of active and passive microwave in soil moisture estimation
NASA Technical Reports Server (NTRS)
O'Neill, P.; Chauhan, N.; Jackson, T.; Saatchi, S.
1992-01-01
Data gathered during the MACHYDRO experiment in central Pennsylvania in July 1990 have been utilized to study the synergistic use of active and passive microwave systems for estimating soil moisture. These data sets were obtained during an eleven-day period with NASA's Airborne Synthetic Aperture Radar (AIRSAR) and Push-Broom Microwave Radiometer (PBMR) over an instrumented watershed which included agricultural fields with a number of different crop covers. Simultaneous ground truth measurements were also made in order to characterize the state of vegetation and soil moisture under a variety of meteorological conditions. A combination algorithm is presented as applied to a representative corn field in the MACHYDRO watershed.
Maximum likelihood analysis of rare binary traits under different modes of inheritance.
Thaller, G; Dempfle, L; Hoeschele, I
1996-08-01
Maximum likelihood methodology was applied to determine the mode of inheritance of rare binary traits with data structures typical for swine populations. The genetic models considered included a monogenic, a digenic, a polygenic, and three mixed polygenic and major gene models. The main emphasis was on the detection of major genes acting on a polygenic background. Deterministic algorithms were employed to integrate and maximize likelihoods. A simulation study was conducted to evaluate model selection and parameter estimation. Three designs were simulated that differed in the number of sires/number of dams within sires (10/10, 30/30, 100/30). Major gene effects of at least one SD of the liability were detected with satisfactory power under the mixed model of inheritance, except for the smallest design. Parameter estimates were empirically unbiased with acceptable standard errors, except for the smallest design, and allowed the genetic models to be clearly distinguished. Distributions of the likelihood ratio statistic were evaluated empirically, because asymptotic theory did not hold. For each simulation model, the Average Information Criterion was computed for all models of analysis. The model with the smallest value was chosen as the best model and was equal to the true model in almost every case studied.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
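The MID/LNP equivalence can be made concrete: fitting an LNP model by maximizing the Poisson log-likelihood recovers the stimulus filter. A toy sketch assuming an exponential nonlinearity (so the problem reduces to a Poisson GLM) and plain Newton/Fisher-scoring updates; the data, filter scale, and iteration count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 8))                 # white Gaussian stimuli
w_true = rng.normal(size=8)                    # true stimulus filter
y = rng.poisson(np.exp(0.2 * X @ w_true))      # LNP spike counts, exp nonlinearity

def fit_lnp(X, y, n_iter=25):
    """Newton / Fisher-scoring ascent on the Poisson (LNP) log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ w)                     # conditional intensity
        grad = X.T @ (y - mu)                  # score
        H = X.T @ (X * mu[:, None])            # Fisher information
        w += np.linalg.solve(H, grad)
    return w

w_hat = fit_lnp(X, y)                          # should align with 0.2 * w_true
```

When spiking is Bernoulli rather than Poisson (as in the discrete-time limitation discussed above), the same fit is misspecified, which is the failure mode the paper's non-Poisson extensions address.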
Spierer, David K; Hagins, Marshall; Rundle, Andrew; Pappas, Evangelos
2011-04-01
Combining accelerometry with heart rate monitoring has been suggested to improve energy estimates; however, it remains unclear whether the single, currently existing commercially available device combining these data streams (Actiheart) provides improved energy estimates compared to simpler and less expensive accelerometry-only devices. The purpose of this study was to compare the validity of the heart rate (HR), accelerometry (ACC), and combined ACC/HR estimates of the Actiheart to the ACC estimates of the Actical during low and moderate intensity activities. Twenty-seven participants (mean age 26.3 ± 7.3) wore an Actical, Actiheart and indirect calorimeter (K4b(2)) while performing card playing, sweeping, lifting weights, walking and jogging activities. All estimates tended to underestimate energy, sometimes by substantial amounts. Viewed across all activities studied, there was no significant difference in the ability of the waist-mounted Actical and torso-mounted Actiheart (ACC, HR, ACC/HR) estimates to predict energy expenditure. However, the Actiheart provided significantly better estimates than the Actical for the activities in which acceleration of the pelvis is not closely related to energy expenditure (card playing, sweeping, lifting weights) and the Actical provided significantly better estimates for level walking and level jogging. Similar to a previous study, the ACC component of the Actiheart was found to be the weakest predictor of energy suggesting it may be responsible for the failure of the combined ACC/HR estimate to equal or better the estimates derived solely from a waist mounted ACC device.
Sidhu, Simranjit K; Bentley, David J; Carroll, Timothy J
2009-02-01
The objective of this study was to determine if a transcranial magnetic stimulation (TMS) method of quantifying the degree to which the motor cortex drives the muscles during voluntary efforts can be reliably applied to the human knee extensors. Although the technique for estimating "cortical" voluntary activation (VA) is valid and reliable for elbow flexors and wrist extensors, evidence that it can be applied to muscles of the lower limb is necessary if twitch interpolation with TMS is to be widely used in research or clinical practice. Eight subjects completed two identical test sessions involving brief isometric knee extensions at forces ranging from rest to maximal voluntary contraction (MVC). Electromyographic (EMG) responses to TMS of the motor cortex and electrical stimulation of the femoral nerve were recorded from the rectus femoris (RF) and biceps femoris (BF) muscles, and knee extension twitch forces evoked by stimulation were measured. The amplitude of TMS-evoked twitch forces decreased linearly between 25% and 100% MVC (r(2) > 0.9), and produced reliable estimations of resting twitch and VA (ICC(2,1) > 0.85). The reliability and size of cortical measures of VA were comparable to those derived from motor nerve stimulation when the resting twitches were estimated on the basis of as few as three TMS trials. Thus, TMS measures of VA may provide a reliable and valid tool in studies investigating central fatigue due to exercise and neurological deficits in neural drive in the lower limbs. PMID:19034956
Use of prediction methods to estimate true density of active pharmaceutical ingredients.
Cao, Xiaoping; Leyva, Norma; Anderson, Stephen R; Hancock, Bruno C
2008-05-01
True density is a fundamental and important property of active pharmaceutical ingredients (APIs). Using prediction methods to estimate the API true density can be very beneficial in pharmaceutical research and development, especially when experimental measurements cannot be made due to lack of material or sample handling restrictions. In this paper, two empirical prediction methods developed by Girolami and Immirzi and Perini were used to estimate the true density of APIs, and the estimation results were compared with experimentally measured values by helium pycnometry. The Girolami method is simple and can be used for both liquids and solids. For the tested APIs, the Girolami method had a maximum error of -12.7% and an average percent error of -3.0% with a 95% CI of (-3.8, -2.3%). The Immirzi and Perini method is more involved and is mainly used for solid crystals. In general, it gives better predictions than the Girolami method. For the tested APIs, the Immirzi and Perini method had a maximum error of 9.6% and an average percent error of 0.9% with a 95% CI of (0.3, 1.6%). PMID:18242023
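The Girolami method amounts to a table lookup plus one formula, ρ ≈ M / (5·ΣV), where ΣV sums per-atom volume increments by periodic-table row, with optional packing corrections. A hedged sketch; the increment values and the 1.1-per-hydrogen-bonding-group correction below are my recollection of Girolami's published scheme and should be checked against the original paper before use:

```python
# assumed Girolami volume increments by periodic-table row:
# H = 1; Li-F = 2; Na-Cl = 4; K-Br = 5; Rb-I = 7.5 (subset tabulated here)
VOLUME = {"H": 1.0, "C": 2.0, "N": 2.0, "O": 2.0, "F": 2.0,
          "Na": 4.0, "P": 4.0, "S": 4.0, "Cl": 4.0,
          "K": 5.0, "Br": 5.0, "Rb": 7.5, "I": 7.5}

def girolami_density(atom_counts, molar_mass, correction=1.0):
    """Estimated true density in g/cm^3; `correction` is an optional packing
    factor (assumed ~1.1 per hydrogen-bonding feature, capped near 1.3)."""
    v = sum(VOLUME[el] * n for el, n in atom_counts.items())
    return correction * molar_mass / (5.0 * v)

# water: sum V = 2*1 + 2 = 4, so rho = 18.02 / 20 * 1.1, close to 1 g/cm^3
rho_water = girolami_density({"H": 2, "O": 1}, 18.02, correction=1.1)
```

The Immirzi and Perini method, being crystal-structure based, does not reduce to a lookup table like this, which is consistent with its better accuracy on solid APIs.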