Science.gov

Sample records for activation likelihood estimate

  1. Identification of Human Gustatory Cortex by Activation Likelihood Estimation

    PubMed Central

    Veldhuizen, Maria G.; Albrecht, Jessica; Zelano, Christina; Boesveldt, Sanne; Breslin, Paul; Lundström, Johan N.

    2010-01-01

    Over the last two decades, neuroimaging methods have identified a variety of taste-responsive brain regions. Their precise location, however, remains in dispute. For example, taste stimulation activates areas throughout the insula and overlying operculum, but identification of subregions has been inconsistent. Furthermore, literature reviews and summaries of gustatory brain activations tend to reiterate rather than resolve this ambiguity. Here we used a new meta-analytic method [activation likelihood estimation (ALE)] to obtain a probability map of the location of gustatory brain activation across fourteen studies. The map of activation likelihood values can also serve as a source of independent coordinates for future region-of-interest analyses. We observed significant cortical activation probabilities in bilateral anterior insula and overlying frontal operculum, bilateral mid dorsal insula and overlying Rolandic operculum, bilateral posterior insula/parietal operculum/postcentral gyrus, left lateral orbitofrontal cortex (OFC), right medial OFC, pregenual anterior cingulate cortex (prACC), and right mediodorsal thalamus. This analysis confirms the involvement of multiple cortical areas within insula and overlying operculum in gustatory processing and provides a functional “taste map” which can be used as an inclusive mask in the data analyses of future studies. In light of this new analysis, we discuss human central processing of gustatory stimuli and identify topics where increased research effort is warranted. PMID:21305668
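
    To make the ALE idea concrete: each reported focus is modeled as a three-dimensional Gaussian probability distribution, each study's foci combine into a modeled-activation (MA) map, and the ALE score at a voxel is the probabilistic union of the MA maps across studies. The NumPy sketch below illustrates only this core computation; it is not the validated GingerALE implementation, and the kernel width, volume shape, and use of a voxelwise maximum within a study (the refinement discussed in record 10 below) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_3d(sigma, radius):
    """Discrete 3D Gaussian probability kernel (sums to 1), in voxel units."""
    ax = np.arange(-radius, radius + 1)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    g = np.exp(-(x * x + y * y + z * z) / (2.0 * sigma * sigma))
    return g / g.sum()

def ale_map(experiments, shape, sigma=2.0, radius=6):
    """experiments: list of per-study focus coordinate lists (voxel indices)."""
    kern = gaussian_kernel_3d(sigma, radius)
    ale = np.zeros(shape)
    for foci in experiments:
        ma = np.zeros(shape)  # modeled-activation map for this study
        for (ci, cj, ck) in foci:
            # clip the kernel's footprint to the image volume
            lo = [max(c - radius, 0) for c in (ci, cj, ck)]
            hi = [min(c + radius + 1, s) for c, s in zip((ci, cj, ck), shape)]
            klo = [l - (c - radius) for l, c in zip(lo, (ci, cj, ck))]
            khi = [k + (h - l) for k, l, h in zip(klo, lo, hi)]
            sub = kern[klo[0]:khi[0], klo[1]:khi[1], klo[2]:khi[2]]
            region = ma[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
            # voxelwise max, so nearby foci from one study do not pile up
            np.maximum(region, sub, out=region)
        # probabilistic union across independent studies
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale

# e.g. two toy studies reporting foci in a 20^3 volume
ale = ale_map([[(10, 10, 10)], [(10, 11, 10), (3, 3, 3)]], shape=(20, 20, 20))
```

    In the published method, this map is then thresholded against a null distribution built from randomly relocated foci to yield the significance maps these records report.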

  2. Joint maximum likelihood estimation of activation and Hemodynamic Response Function for fMRI.

    PubMed

    Bazargani, Negar; Nosratinia, Aria

    2014-07-01

    Blood Oxygen Level Dependent (BOLD) functional magnetic resonance imaging (fMRI) maps the brain activity by measuring blood oxygenation level, which is related to brain activity via a temporal impulse response function known as the Hemodynamic Response Function (HRF). The HRF varies from subject to subject and within areas of the brain, therefore a knowledge of HRF is necessary for accurately computing voxel activations. Conversely a knowledge of active voxels is highly beneficial for estimating the HRF. This work presents a joint maximum likelihood estimation of HRF and activation based on low-rank matrix approximations operating on regions of interest (ROI). Since each ROI has limited data, a smoothing constraint on the HRF is employed via Tikhonov regularization. The method is analyzed under both white noise and colored noise. Experiments with synthetic data show that accurate estimation of the HRF is possible with this method without prior assumptions on the exact shape of the HRF. Further experiments involving real fMRI experiments with auditory stimuli are used to validate the proposed method.
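
    The alternating structure of such a joint estimate can be sketched compactly: with the HRF fixed, each voxel's activation weight has a closed-form least-squares solution; with the weights fixed, the HRF solves a Tikhonov-regularized least-squares problem. The sketch below is a simplified white-noise analogue of that idea, not the authors' low-rank algorithm; the second-difference smoothness penalty, initialization, and fixed iteration count are illustrative assumptions.

```python
import numpy as np

def conv_matrix(stim, hrf_len, T):
    """C such that C @ h equals the stimulus train convolved with the HRF."""
    C = np.zeros((T, hrf_len))
    for t in range(T):
        for l in range(hrf_len):
            if 0 <= t - l < len(stim):
                C[t, l] = stim[t - l]
    return C

def joint_hrf_activation(Y, stim, hrf_len=20, lam=1.0, iters=50):
    """Alternate per-voxel activation weights and a shared ROI-level HRF.
    Y: T x V matrix of BOLD series; stim: binary stimulus train of length T."""
    T, V = Y.shape
    C = conv_matrix(stim, hrf_len, T)
    D = np.diff(np.eye(hrf_len), n=2, axis=0)   # second-difference smoother
    h = np.exp(-0.5 * ((np.arange(hrf_len) - 5.0) / 2.0) ** 2)  # initial bump
    for _ in range(iters):
        x = C @ h                        # predicted response shape
        beta = (x @ Y) / (x @ x)         # closed-form LS activation per voxel
        # Tikhonov-regularized LS for h given beta (normal equations)
        G = (beta @ beta) * (C.T @ C) + lam * (D.T @ D)
        h = np.linalg.solve(G, C.T @ (Y @ beta))
        h /= np.linalg.norm(h) + 1e-12   # fix the h/beta scale ambiguity
    return h, beta
```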

  3. Neuroimaging of Reading Intervention: A Systematic Review and Activation Likelihood Estimate Meta-Analysis

    PubMed Central

    Barquero, Laura A.; Davis, Nicole; Cutting, Laurie E.

    2014-01-01

    A growing number of studies examine instructional training and brain activity. The purpose of this paper is to review the literature regarding neuroimaging of reading intervention, with a particular focus on reading difficulties (RD). To locate relevant studies, searches of peer-reviewed literature were conducted using electronic databases to search for studies from the imaging modalities of fMRI and MEG (including MSI) that explored reading intervention. Of the 96 identified studies, 22 met the inclusion criteria for descriptive analysis. A subset of these (8 fMRI experiments with post-intervention data) was subjected to activation likelihood estimate (ALE) meta-analysis to investigate differences in functional activation following reading intervention. Findings from the literature review suggest differences in functional activation of numerous brain regions associated with reading intervention, including bilateral inferior frontal, superior temporal, middle temporal, middle frontal, superior frontal, and postcentral gyri, as well as bilateral occipital cortex, inferior parietal lobules, thalami, and insulae. Findings from the meta-analysis indicate change in functional activation following reading intervention in the left thalamus, right insula/inferior frontal, left inferior frontal, right posterior cingulate, and left middle occipital gyri. Though these findings should be interpreted with caution due to the small number of studies and the disparate methodologies used, this paper is an effort to synthesize across studies and to guide future exploration of neuroimaging and reading intervention. PMID:24427278

  4. Neuroimaging of reading intervention: a systematic review and activation likelihood estimate meta-analysis.

    PubMed

    Barquero, Laura A; Davis, Nicole; Cutting, Laurie E

    2014-01-01

    A growing number of studies examine instructional training and brain activity. The purpose of this paper is to review the literature regarding neuroimaging of reading intervention, with a particular focus on reading difficulties (RD). To locate relevant studies, searches of peer-reviewed literature were conducted using electronic databases to search for studies from the imaging modalities of fMRI and MEG (including MSI) that explored reading intervention. Of the 96 identified studies, 22 met the inclusion criteria for descriptive analysis. A subset of these (8 fMRI experiments with post-intervention data) was subjected to activation likelihood estimate (ALE) meta-analysis to investigate differences in functional activation following reading intervention. Findings from the literature review suggest differences in functional activation of numerous brain regions associated with reading intervention, including bilateral inferior frontal, superior temporal, middle temporal, middle frontal, superior frontal, and postcentral gyri, as well as bilateral occipital cortex, inferior parietal lobules, thalami, and insulae. Findings from the meta-analysis indicate change in functional activation following reading intervention in the left thalamus, right insula/inferior frontal, left inferior frontal, right posterior cingulate, and left middle occipital gyri. Though these findings should be interpreted with caution due to the small number of studies and the disparate methodologies used, this paper is an effort to synthesize across studies and to guide future exploration of neuroimaging and reading intervention.

  5. The Autonomic Brain: An Activation Likelihood Estimation Meta-Analysis for Central Processing of Autonomic Function

    PubMed Central

    Meissner, Karin; Bär, Karl-Jürgen; Napadow, Vitaly

    2013-01-01

    The autonomic nervous system (ANS) is of paramount importance for daily life. Its regulatory action on respiratory, cardiovascular, digestive, endocrine, and many other systems is controlled by a number of structures in the CNS. While the majority of these nuclei and cortices have been identified in animal models, neuroimaging studies have recently begun to shed light on central autonomic processing in humans. In this study, we used activation likelihood estimation to conduct a meta-analysis of human neuroimaging experiments evaluating central autonomic processing to localize (1) cortical and subcortical areas involved in autonomic processing, (2) potential subsystems for the sympathetic and parasympathetic divisions of the ANS, and (3) potential subsystems for specific ANS responses to different stimuli/tasks. Across all tasks, we identified a set of consistently activated brain regions, comprising left amygdala, right anterior and left posterior insula and midcingulate cortices that form the core of the central autonomic network. While sympathetic-associated regions predominate in executive- and salience-processing networks, parasympathetic regions predominate in the default mode network. Hence, central processing of autonomic function does not simply involve a monolithic network of brain regions, instead showing elements of task and division specificity. PMID:23785162

  6. Cortical Midline Structures and Autobiographical-Self Processes: An Activation-Likelihood Estimation Meta-Analysis

    PubMed Central

    Araujo, Helder F.; Kaplan, Jonas; Damasio, Antonio

    2013-01-01

    The autobiographical-self refers to a mental state derived from the retrieval and assembly of memories regarding one’s biography. The process of retrieval and assembly, which can focus on biographical facts or personality traits or some combination thereof, is likely to vary according to the domain chosen for an experiment. To date, the investigation of the neural basis of this process has largely focused on the domain of personality traits using paradigms that contrasted the evaluation of one’s traits (self-traits) with those of another person (other-traits). This has led to the suggestion that cortical midline structures (CMSs) are specifically related to self states. Here, with the goal of testing this suggestion, we conducted activation-likelihood estimation (ALE) meta-analyses based on data from 28 neuroimaging studies. The ALE results show that both self-traits and other-traits engage CMSs; however, the engagement of medial prefrontal cortex is greater for self-traits than for other-traits, while the posteromedial cortex is more engaged for other-traits than for self-traits. These findings suggest that the involvement of CMSs is not specific to the evaluation of one’s own traits, but also occurs during the evaluation of another person’s traits. PMID:24027520

  7. Gray matter atrophy in narcolepsy: An activation likelihood estimation meta-analysis.

    PubMed

    Weng, Hsu-Huei; Chen, Chih-Feng; Tsai, Yuan-Hsiung; Wu, Chih-Ying; Lee, Meng; Lin, Yu-Ching; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh

    2015-12-01

    We reviewed the literature on the use of voxel-based morphometry (VBM) in narcolepsy magnetic resonance imaging (MRI) studies via a meta-analysis of neuroimaging findings to identify concordant and specific structural deficits in patients with narcolepsy as compared with healthy subjects. We used PubMed to retrieve articles published between January 2000 and March 2014, included all VBM research on narcolepsy, and compared the findings of the studies by using gray matter volume (GMV) or gray matter concentration (GMC) to index differences in gray matter. Stereotactic data were extracted from 8 VBM studies of 149 narcoleptic patients and 162 control subjects. We applied the activation likelihood estimation (ALE) technique and found significant regional gray matter reduction in the bilateral hypothalamus, thalamus, globus pallidus, extending to nucleus accumbens (NAcc) and anterior cingulate cortex (ACC), left mid orbital and rectal gyri (BAs 10 and 11), right inferior frontal gyrus (BA 47), and the right superior temporal gyrus (BA 41) in patients with narcolepsy. The significant gray matter deficits in narcoleptic patients occurred in the bilateral hypothalamus and frontotemporal regions, which may be related to the abnormalities of emotional processing and of the orexin/hypocretin pathway that are common among patients with narcolepsy.

  8. A meta-analysis of neuroimaging studies on divergent thinking using activation likelihood estimation.

    PubMed

    Wu, Xin; Yang, Wenjing; Tong, Dandan; Sun, Jiangzhou; Chen, Qunlin; Wei, Dongtao; Zhang, Qinglin; Zhang, Meng; Qiu, Jiang

    2015-07-01

    In this study, an activation likelihood estimation (ALE) meta-analysis was used to conduct a quantitative investigation of neuroimaging studies on divergent thinking. Based on the ALE results, the functional magnetic resonance imaging (fMRI) studies showed that distributed brain regions were more active under divergent thinking tasks (DTTs) than under control tasks, but a large portion of the brain regions were deactivated. The ALE results indicated that the brain networks of creative idea generation in DTTs may be composed of the lateral prefrontal cortex, posterior parietal cortex [such as the inferior parietal lobule (BA 40) and precuneus (BA 7)], anterior cingulate cortex (ACC) (BA 32), and several regions in the temporal cortex [such as the left middle temporal gyrus (BA 39) and left fusiform gyrus (BA 37)]. The left dorsolateral prefrontal cortex (BA 46) was related to selecting loosely and remotely associated concepts and organizing them into creative ideas, whereas the ACC (BA 32) was related to observing and forming distant semantic associations in performing DTTs. The posterior parietal cortex may be involved in retrieving and buffering the semantic information related to the formed creative ideas, and several regions in the temporal cortex may be related to stored long-term memory. In addition, the ALE results of the structural studies showed that divergent thinking was related to the dopaminergic system (e.g., left caudate and claustrum). Based on the ALE results, both fMRI and structural MRI studies could uncover the neural basis of divergent thinking from different aspects (e.g., specific cognitive processing and stable individual differences in cognitive capability).

  9. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    ERIC Educational Resources Information Center

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  10. Minimizing within-experiment and within-group effects in Activation Likelihood Estimation meta-analyses.

    PubMed

    Turkeltaub, Peter E; Eickhoff, Simon B; Laird, Angela R; Fox, Mick; Wiener, Martin; Fox, Peter

    2012-01-01

    Activation Likelihood Estimation (ALE) is an objective, quantitative technique for coordinate-based meta-analysis (CBMA) of neuroimaging results that has been validated for a variety of uses. Stepwise modifications have improved ALE's theoretical and statistical rigor since its introduction. Here, we evaluate two avenues to further optimize ALE. First, we demonstrate that the maximum contribution an experiment makes to an ALE map is related to the number of foci it reports and their proximity. We present a modified ALE algorithm that eliminates these within-experiment effects. However, we show that these effects only account for 2-3% of cumulative ALE values, and removing them has little impact on thresholded ALE maps. Next, we present an alternate organizational approach to datasets that prevents subject groups with multiple experiments in a dataset from influencing ALE values more than others. This modification decreases cumulative ALE values by 7-9%, changes the relative magnitude of some clusters, and reduces cluster extents. Overall, differences between results of the standard approach and these new methods were small. This finding validates previous ALE reports against concerns that they were driven by within-experiment or within-group effects. We suggest that the modified ALE algorithm is theoretically advantageous compared with the current algorithm, and that the alternate organization of datasets is the most conservative approach for typical ALE analyses and other CBMA methods. Combining the two modifications minimizes both within-experiment and within-group effects, optimizing the degree to which ALE values represent concordance of findings across independent reports.
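
    A two-focus toy calculation shows the within-experiment effect that the modified algorithm removes: unioning a study's own foci lets its contribution grow with the number and proximity of reported peaks, whereas taking the voxelwise maximum caps it. The probabilities below are made up for illustration.

```python
# contribution of one experiment's two nearby foci to a single voxel
p1, p2 = 0.10, 0.08

ma_union = 1 - (1 - p1) * (1 - p2)  # 0.172: grows as more foci pile up
ma_max = max(p1, p2)                # 0.100: capped at the nearest focus
```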

  11. The Sherpa Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
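
    The two-hypothesis comparison can be sketched with a Poisson fit statistic. The toy one-dimensional version below is not the Sherpa/CSC tool itself: the cash() and detect() helpers, the flat background model, the unit-normalized psf array, and the chi-square calibration of the likelihood ratio are simplifying assumptions (the asymptotic reference distribution is only approximate when the source amplitude sits at its boundary of zero).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def cash(model, observed):
    """Cash statistic: -2 x Poisson log-likelihood, up to a data-only constant."""
    m = np.clip(model, 1e-12, None)
    return 2.0 * np.sum(m - observed * np.log(m))

def detect(observed, psf):
    """Fit background-only, then background + PSF-shaped source, and compare.
    observed: counts per pixel; psf: unit-normalized source profile (assumed)."""
    n = len(observed)
    bkg = minimize(lambda p: cash(np.full(n, p[0]), observed),
                   x0=[observed.mean()], bounds=[(1e-9, None)])
    src = minimize(lambda p: cash(p[0] + p[1] * psf, observed),
                   x0=[observed.mean(), 1.0],
                   bounds=[(1e-9, None), (0.0, None)])
    delta = bkg.fun - src.fun   # likelihood-ratio improvement from the source
    return delta, chi2.sf(delta, df=1)   # asymptotic, approximate p-value
```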

  12. An Activation Likelihood Estimation Meta-Analysis Study of Simple Motor Movements in Older and Young Adults

    PubMed Central

    Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.

    2016-01-01

    The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910

  13. An Activation Likelihood Estimation Meta-Analysis Study of Simple Motor Movements in Older and Young Adults.

    PubMed

    Turesky, Ted K; Turkeltaub, Peter E; Eden, Guinevere F

    2016-01-01

    The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders.

  14. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    PubMed Central

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976

  15. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    PubMed

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments (prediction error) is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies.

  16. Reinforcement Learning Models and Their Neural Correlates: An Activation Likelihood Estimation Meta-Analysis

    PubMed Central

    Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.

    2015-01-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667
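
    For readers unfamiliar with the two signals, a minimal temporal-difference sketch makes them concrete: delta is the reward prediction error and the running V is the expected value these studies localize. The learning rate alpha is the kind of parameter that, as the abstract notes, may be fixed across subjects or estimated individually; the function below is a generic textbook update, not a model from any study in the meta-analysis.

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference step: returns the reward prediction error."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # prediction error
    V[s] = V.get(s, 0.0) + alpha * delta                    # expected-value update
    return delta

# model-based fMRI regresses the per-trial delta (and V) time series,
# convolved with an HRF, against each voxel's BOLD signal
V = {}
delta = td_update(V, s="cue", r=1.0, s_next="end")
```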

  17. Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis.

    PubMed

    Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti

    2015-02-01

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the posterior (but not anterior) temporal lobe for both semantic and syntactic processing.

  18. Stuttering, Induced Fluency, and Natural Fluency: A Hierarchical Series of Activation Likelihood Estimation Meta-Analyses

    PubMed Central

    Budde, Kristin S.; Barron, Daniel S.; Fox, Peter T.

    2015-01-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as “neural signatures of stuttering” (Brown 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: 1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and 2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820

  19. Stuttering, induced fluency, and natural fluency: a hierarchical series of activation likelihood estimation meta-analyses.

    PubMed

    Budde, Kristin S; Barron, Daniel S; Fox, Peter T

    2014-12-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state).

  20. The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts

    PubMed Central

    Ishibashi, Ryo; Pobric, Gorana; Saito, Satoru; Lambon Ralph, Matthew A.

    2016-01-01

    The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies. PMID:27362967

  21. Coordinate based meta-analysis of functional neuroimaging data using activation likelihood estimation; full width half max and group comparisons.

    PubMed

    Tench, Christopher R; Tanasescu, Radu; Auer, Dorothee P; Cottam, William J; Constantinescu, Cris S

    2014-01-01

    Coordinate based meta-analysis (CBMA) is used to find regions of consistent activation across fMRI and PET studies selected for their functional relevance to a hypothesis. Results are clusters of foci where multiple studies report activation in the same spatial region, indicating functional relevance. Contrast meta-analysis finds regions where there are consistent differences in activation pattern between two groups. The activation likelihood estimate methods tackle these problems, but require a specification of the uncertainty in focus location: the full width half max (FWHM). Results are sensitive to the FWHM. Furthermore, contrast meta-analysis requires correction for multiple statistical tests. Consequently it is sensitive only to very significant localised differences that produce very small p-values, which remain significant after correction; subtle diffuse differences between the groups can be overlooked. In this report we redefine the FWHM parameter, by analogy with a density clustering algorithm, and provide a method to estimate it. The FWHM is modified to account for the number of studies in the analysis, and represents a substantial change to the CBMA philosophy that can be applied to the current algorithms. Consequently we observe more reliable detection of clusters when there are few studies in the CBMA, and a decreasing false positive rate with larger study numbers. By contrast the standard definition (FWHM independent of the number of studies) is demonstrated to paradoxically increase the false positive rate as the number of studies increases, while reducing the ability to detect true clusters for small numbers of studies. We also provide an algorithm for contrast meta-analysis, which includes a correction for multiple correlated tests that controls the proportion of false clusters expected under the null hypothesis. Furthermore, we detail an omnibus test of difference between groups that is more sensitive than contrast meta-analysis when differences are diffuse.
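
    For orientation, the FWHM discussed here is a reparameterization of the Gaussian kernel's standard deviation, related by a fixed constant. The conversion below is standard; the 10 mm width is only an illustrative value of the order used in CBMA.

```python
import numpy as np

def fwhm_to_sigma(fwhm_mm):
    """FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian kernel."""
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # divisor ~2.3548

sigma = fwhm_to_sigma(10.0)  # ~4.25 mm for an illustrative 10 mm kernel
```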

  22. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128

  23. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).

    PubMed

    Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies.
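
    The division of labor in MIDDLE is easy to sketch: only a parameter vector travels out to each device and only a scalar likelihood travels back. The toy below fits a normal model this way; the device_negloglik() and fit_middle() names, the normal model, and the Nelder-Mead optimizer are illustrative assumptions, not the MIDDLE software itself.

```python
import numpy as np
from scipy.optimize import minimize

def device_negloglik(local_data, theta):
    """Runs on the participant's device; only this scalar is returned."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (local_data - mu) / sigma
    return float(np.sum(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2 * np.pi)))

def fit_middle(devices, theta0=(0.0, 0.0)):
    """Central optimizer: ships theta out, sums the returned scalars, and
    iterates until the model converges. Raw data never leaves the devices."""
    objective = lambda theta: sum(device_negloglik(d, theta) for d in devices)
    return minimize(objective, x0=np.asarray(theta0), method="Nelder-Mead").x

# toy run: ten participants, each holding 50 private observations
rng = np.random.default_rng(0)
devices = [rng.normal(2.0, 1.5, 50) for _ in range(10)]
mu_hat, log_sigma_hat = fit_middle(devices)
```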

  24. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…

  25. Maximum likelihood estimates of polar motion parameters

    NASA Technical Reports Server (NTRS)

    Wilson, Clark R.; Vicente, R. O.

    1990-01-01

    Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.

  26. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)

  27. Correlation Between Brain Activation Changes and Cognitive Improvement Following Cognitive Remediation Therapy in Schizophrenia: An Activation Likelihood Estimation Meta-analysis

    PubMed Central

    Wei, Yan-Yan; Wang, Ji-Jun; Yan, Chao; Li, Zi-Qiang; Pan, Xiao; Cui, Yi; Su, Tong; Liu, Tao-Sheng; Tang, Yun-Xiang

    2016-01-01

    Background: Several studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have indicated that cognitive remediation therapy (CRT) might improve cognitive function by changing brain activations in patients with schizophrenia. However, the results were not consistent in these changed brain areas in different studies. The present activation likelihood estimation (ALE) meta-analysis was conducted to investigate whether cognitive function change was accompanied by the brain activation changes, and where the main areas most related to these changes were in schizophrenia patients after CRT. Analyses of whole-brain studies and whole-brain + region of interest (ROI) studies were compared to explore the effect of the different methodologies on the results. Methods: A computerized systematic search was conducted to collect fMRI and PET studies on brain activation changes in schizophrenia patients from pre- to post-CRT. Nine studies using fMRI techniques were included in the meta-analysis. Ginger ALE 2.3.1 was used to perform meta-analysis across these imaging studies. Results: The main areas with increased brain activation were in frontal and parietal lobe, including left medial frontal gyrus, left inferior frontal gyrus, right middle frontal gyrus, right postcentral gyrus, and inferior parietal lobule in patients after CRT, yet no decreased brain activation was found. Although similar increased activation brain areas were identified in ALE with or without ROI studies, analysis including ROI studies had a higher ALE value. Conclusions: The current findings suggest that CRT might improve the cognition of schizophrenia patients by increasing activations of the frontal and parietal lobe. In addition, it might provide more evidence to confirm results by including ROI studies in ALE meta-analysis. PMID:26904993

  28. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    DTIC Science & Technology

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965). References recoverable from the record: [24] Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601. [25] Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119. [26] …, Probability Letters, Vol. 1, No. 3, 197-202.
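
    Because the Cauchy likelihood has no closed-form maximizer in the location parameter and can be multimodal, the MLE must be found numerically. The sketch below (with an assumed known scale and a hypothetical cauchy_mle_location() helper) illustrates the estimation problem the report studies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cauchy_mle_location(x, scale=1.0):
    """Numerically maximize the Cauchy log-likelihood over the location.
    The likelihood can be multimodal, so a bounded search around the
    median (a natural robust starting point) is used here."""
    nll = lambda m: float(np.sum(np.log(1.0 + ((x - m) / scale) ** 2)))
    med = np.median(x)
    res = minimize_scalar(nll, bounds=(med - 10 * scale, med + 10 * scale),
                          method="bounded")
    return res.x
```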

  29. Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies.

    PubMed

    Wu, Haiyan; Luo, Yi; Feng, Chunliang

    2016-12-01

    People often align their behaviors with group opinions, known as social conformity. Many neuroscience studies have explored the neuropsychological mechanisms underlying social conformity. Here we employed a coordinate-based meta-analysis on neuroimaging studies of social conformity to reveal the convergence of the underlying neural architecture. We identified a convergence of reported activation foci in regions associated with normative decision-making, including ventral striatum (VS), dorsal posterior medial frontal cortex (dorsal pMFC), and anterior insula (AI). Specifically, consistent deactivation of VS and activation of dorsal pMFC and AI are identified when people's responses deviate from group opinions. In addition, the deviation-related responses in dorsal pMFC predict people's conforming behavioral adjustments. These findings are consistent with current models in which disagreement with others might evoke "error" signals, cognitive imbalance, and/or aversive feelings, which are plausibly detected in these brain regions as control signals to facilitate subsequent conforming behaviors. Finally, group opinions result in altered neural correlates of valuation, manifested as stronger VS responses to stimuli endorsed, rather than disliked, by others.

  30. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01

    INTRODUCTION TO STATISTICAL ESTIMATION THEORY. A classical area of intense interest… information about the nature of the error, e. This technique is known as regression. One example of such a model is classical linear regression… the only reasonable estimation alternative. Also, for the classical linear normal model, the M.L.E. provides the L.S. solution. For small samples…
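
    The excerpt's closing claim, that the MLE reproduces the least-squares solution for the classical linear normal model, is easy to verify numerically; the data below are synthetic and the check is a sketch, not material from the report.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=100)

# numerical MLE under Gaussian errors (sigma fixed; constants dropped)...
nll = lambda b: 0.5 * np.sum((y - X @ b) ** 2)
beta_mle = minimize(nll, x0=np.zeros(2)).x

# ...coincides with the closed-form least-squares solution
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(beta_mle, beta_ls, atol=1e-4)
```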

  31. Maximum Marginal Likelihood Estimation for Semiparametric Item Analysis.

    ERIC Educational Resources Information Center

    Ramsay, J. O.; Winsberg, S.

    1991-01-01

    A method is presented for estimating the item characteristic curve (ICC) using polynomial regression splines. Estimation of spline ICCs is described by maximizing the marginal likelihood formed by integrating ability over a beta prior distribution. Simulation results compare this approach with the joint estimation of ability and item parameters.…

  32. Likelihood Ratio Gradient Estimation: An Overview

    DTIC Science & Technology

    1987-10-01

    Likelihood Ratio Gradient Estimation: An Overview (technical report). Suppose that in a single-server queueing model the inter-arrival distribution and the service distribution are unknown, and that one is given observations X1, …, Xn of the inter-arrival times and Y1, …, Ym of the service times. The situation described above in the single-server queueing context is typical of many statistical problems that arise in…
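
    The identity behind likelihood-ratio gradient estimation is d/dtheta E_theta[f(X)] = E_theta[f(X) * d log p(X; theta)/dtheta], which turns simulation output into a gradient estimate. Below is a minimal Monte Carlo check with an exponential inter-arrival model, chosen here for convenience rather than taken from the report.

```python
import numpy as np

# Score-function (likelihood-ratio) gradient estimator.
# Example: X ~ Exponential(rate theta), f(X) = X, so E[f] = 1/theta and the
# exact gradient with respect to theta is -1/theta**2.
rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=1.0 / theta, size=200_000)
score = 1.0 / theta - x         # d/d(theta) of log(theta * exp(-theta * x))
grad_lr = np.mean(x * score)    # Monte Carlo estimate, ~ -0.25 for theta = 2
print(grad_lr, -1.0 / theta**2)
```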

  33. A Meta-analysis on the neural basis of planning: Activation likelihood estimation of functional brain imaging results in the Tower of London task.

    PubMed

    Nitschke, Kai; Köstering, Lena; Finkel, Lisa; Weiller, Cornelius; Kaller, Christoph P

    2017-01-01

    The ability to mentally design and evaluate series of future actions has often been studied in terms of planning abilities, commonly using well-structured laboratory tasks like the Tower of London (ToL). Despite a wealth of studies, findings on the specific localization of planning processes within prefrontal cortex (PFC) and on the hemispheric lateralization are equivocal. Here, we address this issue by integrating evidence from two different sources of data: First, we provide a systematic overview of the existing lesion data on planning in the ToL (10 studies, 211 patients) which does not indicate any evidence for a general lateralization of planning processes in (pre)frontal cortex. Second, we report a quantitative meta-analysis with activation likelihood estimation based on 31 functional neuroimaging datasets on the ToL. Separate meta-analyses of the activation patterns reported for Overall Planning (537 participants) and for Planning Complexity (182 participants) congruently show bilateral contributions of mid-dorsolateral PFC, frontal eye fields, supplementary motor area, precuneus, caudate, anterior insula, and inferior parietal cortex in addition to a left-lateralized involvement of rostrolateral PFC. In contrast to previous attributions of planning-related brain activity to the entire dorsolateral prefrontal cortex (dlPFC) and either its left or right homolog derived from single studies on the ToL, the present meta-analyses stress the pivotal role specifically of the mid-dorsolateral part of PFC (mid-dlPFC), presumably corresponding to Brodmann Areas 46 and 9/46, and strongly argue for a bilateral rather than lateralized involvement of the dlPFC in planning in the ToL.

  34. Properties of maximum likelihood male fertility estimation in plant populations.

    PubMed Central

    Morgan, M T

    1998-01-01

    Computer simulations are used to evaluate maximum likelihood methods for inferring male fertility in plant populations. The maximum likelihood method can provide substantial power to characterize male fertilities at the population level. Results emphasize, however, the importance of adequate experimental design and evaluation of fertility estimates, as well as limitations to inference (e.g., about the variance in male fertility or the correlation between fertility and phenotypic trait value) that can be reasonably drawn. PMID:9611217

  35. Mapping the “What” and “Where” Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry

    PubMed Central

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declining HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions, and investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in the late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in the AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated with the deterioration of overall cognitive status and with the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  36. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  37. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  38. Empirical Likelihood for Estimating Equations with Nonignorably Missing Data.

    PubMed

    Tang, Niansheng; Zhao, Puying; Zhu, Hongtu

    2014-04-01

    We develop an empirical likelihood (EL) inference on parameters in generalized estimating equations with nonignorably missing response data. We consider an exponential tilting model for the nonignorably missing mechanism, and propose modified estimating equations by imputing missing data through a kernel regression method. We establish some asymptotic properties of the EL estimators of the unknown parameters under different scenarios. With the use of auxiliary information, the EL estimators are statistically more efficient. Simulation studies are used to assess the finite sample performance of our proposed EL estimators. We apply our EL estimators to investigate a data set on earnings obtained from the New York Social Indicators Survey.

  39. Maximum-likelihood estimation of admixture proportions from genetic data.

    PubMed Central

    Wang, Jinliang

    2003-01-01

    For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794

  40. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  1. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  2. Parameter estimation in X-ray astronomy using maximum likelihood

    NASA Technical Reports Server (NTRS)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods for estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used minimum chi-squared alternatives because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
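
    A minimal sketch of Poisson maximum-likelihood spectral fitting in this spirit, assuming a hypothetical power-law model and simulated counts; the energy grid and parameter values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical binned X-ray spectrum: expected counts follow a power law
energies = np.linspace(1.0, 10.0, 32)          # keV bin centres (assumed)

def expected_counts(params):
    log_amp, gamma = params                    # log-amplitude keeps mu > 0
    return np.exp(log_amp) * energies ** (-gamma)

def neg_log_likelihood(params, counts):
    mu = expected_counts(params)
    # Poisson log-likelihood up to an additive constant: sum(n*log(mu) - mu)
    return -(counts * np.log(mu) - mu).sum()

rng = np.random.default_rng(1)
counts = rng.poisson(expected_counts((np.log(50.0), 1.7)))  # simulated data
fit = minimize(neg_log_likelihood, x0=(np.log(10.0), 1.0),
               args=(counts,), method="Nelder-Mead")
print(np.exp(fit.x[0]), fit.x[1])   # ML estimates of amplitude and index
```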

  3. Estimated likelihood of observing a large earthquake on a continental low-angle normal fault and implications for low-angle normal fault activity

    NASA Astrophysics Data System (ADS)

    Styron, Richard H.; Hetland, Eric A.

    2014-04-01

    The lack of observed continental earthquakes that clearly occurred on low-angle normal faults (LANFs) may indicate that these structures are not seismically active or that these earthquakes are simply rare events. To address this, we compile all potentially active continental LANFs (24 in total) and calculate the likelihood of observing a significant earthquake on them over periods of 1-100 years. This probability depends on several factors including the frequency-magnitude distribution. For either a characteristic or Gutenberg-Richter distribution, we calculate a probability of about 0.5 that an earthquake greater than M6.5 (large enough to avoid ambiguity in dip angle) will be observed on any LANF in a period of 35 years, which is the current length of the global centroid moment tensor catalog. We then use Bayes' Theorem to illustrate how the absence of observed significant LANF seismicity over the catalog period moderately decreases the likelihood that the structures generate large earthquakes.
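
    A minimal numerical illustration of both calculations, with an assumed combined event rate chosen only to reproduce a probability near 0.5; the paper's actual rates derive from fault slip rates and frequency-magnitude distributions.

```python
import numpy as np

# Illustrative only: if large LANF earthquakes occur as a Poisson process
# with combined rate `lam` (events/yr across all faults), the chance of
# observing at least one in a window of T years is 1 - exp(-lam * T).
lam = 0.02                       # assumed combined rate, events per year
T = 35.0                         # length of the GCMT catalog, years
p_obs = 1.0 - np.exp(-lam * T)   # about 0.5 for these assumed numbers

# Bayes' theorem: update the probability that LANFs generate large
# earthquakes (H) after observing no qualifying event (D) in T years.
prior = 0.5
p_D_given_H = np.exp(-lam * T)   # faults active, but no event occurred
p_D_given_notH = 1.0             # faults cannot host such events
posterior = (p_D_given_H * prior) / (
    p_D_given_H * prior + p_D_given_notH * (1 - prior))
print(p_obs, posterior)          # posterior < prior: a moderate decrease
```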

  4. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
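
    A sketch of the compression idea using the signal-to-noise eigenvector basis described above, on toy covariance matrices; `compress_basis` is a hypothetical helper, not the authors' pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def compress_basis(S, N, n_modes):
    """Signal-to-noise eigenvector basis: solve S v = lam N v and keep
    the n_modes eigenvectors with the highest S/N eigenvalues."""
    lam, V = eigh(S, N)                  # generalized symmetric eigenproblem
    order = np.argsort(lam)[::-1]        # sort by decreasing S/N
    return V[:, order[:n_modes]]

# Toy example with random positive-definite signal and noise covariances
rng = np.random.default_rng(2)
npix = 100
A = rng.normal(size=(npix, npix))
S = A @ A.T / npix                       # "signal" covariance
N = np.eye(npix) * 0.5                   # "noise" covariance
B = compress_basis(S, N, n_modes=30)
d = rng.normal(size=npix)                # a data vector (map pixels)
d_compressed = B.T @ d                   # 30 compressed modes
```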

  5. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions.

    PubMed

    Park, Yongseok; Taylor, Jeremy M G; Kalbfleisch, John D

    2012-06-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method.

  6. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and, in particular, Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. The basic algebra is excessively complicated, and a Maple code implementation is an important part of the solution process. Low-order maximum likelihood moments are given, along with Fisher's examples relating to data on ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.

  7. Digital combining-weight estimation for broadband sources using maximum-likelihood estimates

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.; Vilnrotter, V. A.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood estimate provides some improvement in performance at the cost of increased computational complexity; however, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.

  8. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal-likelihood frontal-face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the novel maximal-likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.

  9. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.

    PubMed

    Schuler, Megan S; Rose, Sherri

    2017-01-01

    Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.
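
    A minimal sketch of TMLE's targeting step for the average treatment effect on simulated data, assuming simple parametric logistic models in place of the ensembled machine learning the authors recommend; all variable names and the data-generating process are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit

rng = np.random.default_rng(3)
n = 2000
W = rng.normal(size=(n, 2))                          # baseline covariates
A = rng.binomial(1, expit(0.4 * W[:, 0] - 0.3 * W[:, 1]))   # treatment
Y = rng.binomial(1, expit(0.5 * A + 0.6 * W[:, 0]))         # binary outcome

X = np.column_stack([np.ones(n), A, W])
Q_fit = sm.GLM(Y, X, family=sm.families.Binomial()).fit()   # outcome model
g_fit = sm.GLM(A, np.column_stack([np.ones(n), W]),
               family=sm.families.Binomial()).fit()         # propensity model

g = np.clip(g_fit.fittedvalues, 0.01, 0.99)  # clip to avoid extreme weights
QA = Q_fit.fittedvalues                                     # Q(A, W)
Q1 = Q_fit.predict(np.column_stack([np.ones(n), np.ones(n), W]))
Q0 = Q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), W]))

# Targeting step: fluctuate Q along the "clever covariate" H via a
# no-intercept logistic regression with logit(Q) as offset
H = A / g - (1 - A) / (1 - g)
eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
             offset=logit(QA)).fit().params[0]
Q1_star = expit(logit(Q1) + eps * (1 / g))
Q0_star = expit(logit(Q0) + eps * (-1 / (1 - g)))
print("TMLE ATE:", np.mean(Q1_star - Q0_star))
```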

  10. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.

  11. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of parameters of the ex-Gaussian distribution. Additionally, for each approach maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
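
    For the single-level ML case, a minimal sketch of fitting the ex-Gaussian by maximum likelihood using SciPy's exponnorm parameterization (shape K = tau/sigma); the simulated parameter values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma, tau = 0.4, 0.05, 0.2          # ex-Gaussian parameters (seconds)
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# scipy's exponnorm uses shape K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt)
print("mu=%.3f sigma=%.3f tau=%.3f" % (loc, scale, K * scale))
```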

  12. Brain Anatomical Abnormalities in High-Risk Individuals, First-Episode, and Chronic Schizophrenia: An Activation Likelihood Estimation Meta-analysis of Illness Progression

    PubMed Central

    Chan, Raymond C. K.; Di, Xin; McAlonan, Grainne M.; Gong, Qi-yong

    2011-01-01

    Objective: The present study reviewed voxel-based morphometry (VBM) studies on high-risk individuals with schizophrenia, patients experiencing their first-episode schizophrenia (FES), and those with chronic schizophrenia. We predicted that gray matter abnormalities would show progressive changes, with most extensive abnormalities in the chronic group relative to FES and least in the high-risk group. Method: Forty-one VBM studies were reviewed. Eight high-risk studies, 14 FES studies, and 19 chronic studies were analyzed using anatomical likelihood estimation meta-analysis. Results: Less gray matter in the high-risk group relative to controls was observed in anterior cingulate regions, left amygdala, and right insula. Lower gray matter volumes in FES compared with controls were also found in the anterior cingulate and right insula but not the amygdala. Lower gray matter volumes in the chronic group were most extensive, incorporating similar regions to those found in FES and high-risk groups but extending to superior temporal gyri, thalamus, posterior cingulate, and parahippocampal gyrus. Subtraction analysis revealed less frontotemporal, striatal, and cerebellar gray matter in FES than in the high-risk group; the high-risk group had less gray matter in left subcallosal gyrus, left amygdala, and left inferior frontal gyrus compared with FES. Subtraction analysis confirmed lower gray matter volumes through ventral-dorsal anterior cingulate, right insula, left amygdala, and thalamus in chronic schizophrenia relative to FES. Conclusions: Frontotemporal brain structural abnormalities are evident in nonpsychotic individuals at high risk of developing schizophrenia. The present meta-analysis indicates that these gray matter abnormalities become more extensive through first-episode and chronic illness. Thus, schizophrenia appears to be a progressive cortico-striato-thalamic loop disorder. PMID:19633214

  13. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  14. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Maximum likelihood estimation in meta-analytic structural equation modeling.

    PubMed

    Oort, Frans J; Jak, Suzanne

    2016-06-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical properties is two-stage structural equation modeling, in which maximum likelihood analysis is used to estimate the common correlation matrix in the first stage, and weighted least squares analysis is used to fit structural equation models to the common correlation matrix in the second stage. In the present paper, we propose an alternative method, ML MASEM, that uses ML estimation throughout. In a simulation study, we use both methods and compare chi-square distributions, bias in parameter estimates, false positive rates, and true positive rates. Both methods appear to yield unbiased parameter estimates and false and true positive rates that are close to the expected values. ML MASEM parameter estimates are found to be significantly less biased than two-stage structural equation modeling estimates, but the differences are very small. The choice between the two methods may therefore be based on other fundamental or practical arguments. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    PubMed Central

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875

  17. Pattern recognition using maximum likelihood estimation and orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Islam, M. M.; Alam, M. S.

    2006-08-01

    Hyperspectral sensor imagery (HSI) is a relatively new area of research; however, it is used extensively in geology, agriculture, defense, intelligence, and law enforcement applications. Much of the current research focuses on object detection with a low false-alarm rate. Over the past several years, many object detection algorithms have been developed, including the linear detector, the quadratic detector, and the adaptive matched filter. In those methods the available data cube is used directly to determine the background mean and covariance matrix, under the assumption that the number of object pixels is low compared with the number of data pixels. In this paper, we use the orthogonal subspace projection (OSP) technique to find the background matrix from the given image data. Our algorithm consists of three parts. In the first part, we calculate the background matrix using the OSP technique. In the second part, we determine the maximum likelihood estimates of the parameters. Finally, we consider the likelihood ratio, commonly known as the Neyman-Pearson quadratic detector, to recognize the objects. The proposed technique has been investigated via computer simulation, where excellent performance has been observed.
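
    A minimal sketch of the orthogonal subspace projection step on toy data; the background signatures, dimensions, and the matched-filter scoring are assumptions for illustration, not the paper's full three-part algorithm.

```python
import numpy as np

def osp_projector(B):
    """Orthogonal subspace projection: P annihilates the background
    subspace spanned by the columns of B, P = I - B (B^T B)^{-1} B^T."""
    return np.eye(B.shape[0]) - B @ np.linalg.pinv(B)

# Toy hyperspectral pixels: 50 bands, background spanned by 3 signatures
rng = np.random.default_rng(5)
bands, n_pix = 50, 1000
B = rng.normal(size=(bands, 3))            # assumed background signatures
target = rng.normal(size=bands)            # assumed target signature
pixels = B @ rng.normal(size=(3, n_pix)) + 0.1 * rng.normal(size=(bands, n_pix))
pixels[:, :5] += target[:, None]           # implant a few target pixels

P = osp_projector(B)
scores = target @ P @ pixels               # matched filter after background removal
print(np.argsort(scores)[-5:])             # highest scores flag the targets
```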

  18. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
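
    A minimal sketch of Poisson regression for a linear-quadratic yield model on simulated data; the identity link mirrors the additive dose-response form, though the doses and coefficients here are illustrative, and an identity-link fit assumes the fitted means stay positive.

```python
import numpy as np
import statsmodels.api as sm

# Simulated dicentric yields: Poisson counts with linear-quadratic dose term
rng = np.random.default_rng(6)
dose = np.repeat([0.5, 1.0, 2.0, 3.0, 4.0], 100)     # Gy
mu = 0.01 + 0.03 * dose + 0.06 * dose ** 2           # aberrations per cell
y = rng.poisson(mu)

X = np.column_stack([np.ones_like(dose), dose, dose ** 2])
fit = sm.GLM(y, X, family=sm.families.Poisson(
    link=sm.families.links.Identity())).fit()
print(fit.params)        # ML estimates of (alpha0, alpha1, alpha2)
```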

  19. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  20. Maximum-likelihood estimation of haplotype frequencies in nuclear families.

    PubMed

    Becker, Tim; Knapp, Michael

    2004-07-01

    The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
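
    A minimal two-SNP illustration of the EM idea behind haplotype frequency estimation (not FAMHAP's algorithm): only the double heterozygote is phase-ambiguous, and the E-step splits it according to the current frequency estimates.

```python
import numpy as np

def em_haplotypes(n_counts, n_iter=200):
    """EM for two-SNP haplotype frequencies from unphased genotype counts.
    Haplotype order: AB, Ab, aB, ab. n_counts[i, j] holds the number of
    individuals with i copies of allele 'a' and j copies of allele 'b'.
    Only the double heterozygote AaBb is ambiguous: AB/ab or Ab/aB."""
    h = np.full(4, 0.25)                        # initial frequencies
    for _ in range(n_iter):
        # E-step: split double heterozygotes by current expected phase
        p_cis, p_trans = h[0] * h[3], h[1] * h[2]
        w = p_cis / (p_cis + p_trans)
        n11 = n_counts[1, 1]
        # M-step: count haplotypes (each person carries two)
        c = np.zeros(4)
        c[0] = 2*n_counts[0,0] + n_counts[0,1] + n_counts[1,0] + w*n11
        c[1] = 2*n_counts[0,2] + n_counts[0,1] + n_counts[1,2] + (1-w)*n11
        c[2] = 2*n_counts[2,0] + n_counts[1,0] + n_counts[2,1] + (1-w)*n11
        c[3] = 2*n_counts[2,2] + n_counts[2,1] + n_counts[1,2] + w*n11
        h = c / c.sum()
    return h

counts = np.array([[30, 20, 4], [25, 40, 10], [5, 12, 8]])
print(em_haplotypes(counts))   # estimated frequencies of AB, Ab, aB, ab
```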

  1. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In the methods and approaches developed for estimating the operating characteristics of the discrete item responses, the maximum likelihood estimate of the examinee based upon the "Old Test" has an important role. When Old Test does not provide a sufficient amount of test information for the upper and lower part of the ability interval,…

  2. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)

  3. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  4. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations is discussed for numerically evaluating the maximum-likelihood estimate. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.
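
    A minimal sketch of such a successive-approximations (EM-type) iteration for mixture proportions with known component densities, including a step-size in the spirit of the abstract; the function name and toy mixture are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def estimate_proportions(x, densities, n_iter=500, step=1.0):
    """Successive approximations for mixture proportions with known
    component densities f_j. With step=1 this is the EM fixed point
    pi_j <- mean_i pi_j f_j(x_i) / sum_k pi_k f_k(x_i)."""
    F = np.column_stack([f(x) for f in densities])   # n x m density values
    pi = np.full(F.shape[1], 1.0 / F.shape[1])
    for _ in range(n_iter):
        resp = F * pi                                # unnormalized posteriors
        resp /= resp.sum(axis=1, keepdims=True)
        pi = (1 - step) * pi + step * resp.mean(axis=0)
    return pi

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 700)])
dens = [lambda t: stats.norm.pdf(t, 0, 1), lambda t: stats.norm.pdf(t, 3, 1)]
print(estimate_proportions(x, dens))   # approximately [0.3, 0.7]
```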

  5. Comparison of induced rules based on likelihood estimation

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2002-03-01

    Rule induction methods have been applied to knowledge discovery in databases and data mining. The empirical results obtained show that they are very powerful and that important knowledge has been extracted from datasets. However, comparison and evaluation of rules are based not on statistical evidence but on rather naive indices, such as conditional probabilities and functions of conditional probabilities. In this paper, we introduce two approaches to the statistical comparison of induced rules. For the statistical evaluation, the likelihood ratio test and Fisher's exact test play an important role: the likelihood ratio statistic measures statistical information about an information table and is used to measure the difference between two tables.

  6. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from the model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
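
    Minimal sketches of the two simplest estimators mentioned (AME and HME), written with log-sum-exp for numerical stability; the toy model in the demo is an assumption for illustration, and the HME's well-known instability is noted in the comments.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def arithmetic_mean_logml(loglik_prior_draws):
    """AME: log marginal likelihood from log-likelihoods evaluated at
    draws from the prior, log mean exp(log L)."""
    n = len(loglik_prior_draws)
    return logsumexp(loglik_prior_draws) - np.log(n)

def harmonic_mean_logml(loglik_posterior_draws):
    """HME: log of the harmonic mean of likelihoods at posterior draws
    (known to be unstable in practice, hence the stabilized variant)."""
    n = len(loglik_posterior_draws)
    return -(logsumexp(-loglik_posterior_draws) - np.log(n))

# Toy check: N(mu, 1) data with prior mu ~ N(0, 1)
rng = np.random.default_rng(8)
x = rng.normal(0.3, 1.0, 50)
mu_prior = rng.normal(0.0, 1.0, 20000)
loglik = np.array([stats.norm.logpdf(x, m, 1.0).sum() for m in mu_prior])
print(arithmetic_mean_logml(loglik))
```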

  7. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  8. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  9. Partial order optimum likelihood (POOL): maximum likelihood prediction of protein active site residues using 3D Structure and sequence properties.

    PubMed

    Tong, Wenxu; Wei, Ying; Murga, Leonel F; Ondrechen, Mary Jo; Williams, Ronald J

    2009-01-01

    A new monotonicity-constrained maximum likelihood approach, called Partial Order Optimum Likelihood (POOL), is presented and applied to the problem of functional site prediction in protein 3D structures, an important current challenge in genomics. The input consists of electrostatic and geometric properties derived from the 3D structure of the query protein alone. Sequence-based conservation information, where available, may also be incorporated. Electrostatics features from THEMATICS are combined with multidimensional isotonic regression to form maximum likelihood estimates of probabilities that specific residues belong to an active site. This allows likelihood ranking of all ionizable residues in a given protein based on THEMATICS features. The corresponding ROC curves and statistical significance tests demonstrate that this method outperforms prior THEMATICS-based methods, which in turn have been shown previously to outperform other 3D-structure-based methods for identifying active site residues. Then it is shown that the addition of one simple geometric property, the size rank of the cleft in which a given residue is contained, yields improved performance. Extension of the method to include predictions of non-ionizable residues is achieved through the introduction of environment variables. This extension results in even better performance than THEMATICS alone and constitutes to date the best functional site predictor based on 3D structure only, achieving nearly the same level of performance as methods that use both 3D structure and sequence alignment data. Finally, the method also easily incorporates such sequence alignment data, and when this information is included, the resulting method is shown to outperform the best current methods using any combination of sequence alignments and 3D structures. Included is an analysis demonstrating that when THEMATICS features, cleft size rank, and alignment-based conservation scores are used individually or in combination

  10. Using maximum likelihood to estimate population size from temporal changes in allele frequencies.

    PubMed Central

    Williamson, E G; Slatkin, M

    1999-01-01

    We develop a maximum-likelihood framework for using temporal changes in allele frequencies to estimate the number of breeding individuals in a population. We use simulations to compare the performance of this estimator to an F-statistic estimator of variance effective population size. The maximum-likelihood estimator had a lower variance and smaller bias. Taking advantage of the likelihood framework, we extend the model to include exponential growth and show that temporal allele frequency data from three or more sampling events can be used to test for population growth. PMID:10353915
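
    A deliberately crude sketch of the likelihood-over-Ne idea, using a Gaussian approximation to drift at a single locus and ignoring sampling error; the authors' estimator uses exact Wright-Fisher transition probabilities with binomial sampling, so this is illustration only, with invented numbers.

```python
import numpy as np
from scipy import stats

# Allele frequencies observed at two time points, t generations apart
p0, pt, t = 0.30, 0.38, 10

# Drift variance after t generations: p0(1-p0) * (1 - (1 - 1/(2Ne))^t);
# evaluate a Gaussian likelihood over a grid of candidate Ne values
Ne_grid = np.arange(10, 2000)
var = p0 * (1 - p0) * (1 - (1 - 1 / (2 * Ne_grid)) ** t)
loglik = stats.norm.logpdf(pt, loc=p0, scale=np.sqrt(var))
print("ML Ne:", Ne_grid[np.argmax(loglik)])
```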

  11. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance indicates the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  12. Building unbiased estimators from non-gaussian likelihoods with application to shear estimation

    SciTech Connect

    Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; Slosar, Anze

    2015-01-15

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong’s estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.

  14. Using subsampling to estimate the strength of handwriting evidence via score-based likelihood ratios.

    PubMed

    Davis, Linda J; Saunders, Christopher P; Hepler, Amanda; Buscaglia, JoAnn

    2012-03-10

    The likelihood ratio paradigm has been studied as a means for quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to go about estimating a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach lies in generating simulated writing samples from a collection of writing samples from a known source to form a database for estimating the distribution associated with the numerator of the likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions.
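
    A minimal sketch of a score-based likelihood ratio computed from kernel density estimates of same-writer and different-writer score distributions; the simulated scores stand in for the paper's simulated-writing procedure and are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
# Hypothetical similarity scores between pairs of writing samples
same_writer = rng.normal(0.8, 0.10, 500)    # numerator (same-source) scores
diff_writer = rng.normal(0.4, 0.15, 5000)   # denominator (different-source)

f_same = gaussian_kde(same_writer)          # density under prosecution prop.
f_diff = gaussian_kde(diff_writer)          # density under defense prop.

s = 0.7                                     # score for the questioned pair
slr = f_same(s)[0] / f_diff(s)[0]
print("score-based LR:", slr)
```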

  15. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  16. Maximum Likelihood Estimation for Multiple Camera Target Tracking on Grassmann Tangent Subspace.

    PubMed

    Amini-Omam, Mojtaba; Torkamani-Azar, Farah; Ghorashi, Seyed Ali

    2016-11-15

    In this paper, we introduce a likelihood model for tracking the location of an object in multiple-view systems. Our proposed model transforms the conventional nonlinear Euclidean estimation model into an estimation model based on the manifold tangent subspace. We show that, by decomposing the input noise into two parts and describing the model by an exponential map, real observations in Euclidean geometry can be transformed to the manifold tangent subspace. Moreover, using the obtained tangent-subspace likelihood function, we propose two maximum likelihood estimation approaches, one iterative and one noniterative, whose good performance is shown by numerical results.

  17. A New Maximum Likelihood Estimator for the Population Squared Multiple Correlation.

    ERIC Educational Resources Information Center

    Alf, Edward F., Jr.; Graf, Richard G.

    2002-01-01

    Developed a new estimator for the population squared multiple correlation using maximum likelihood estimation. Data from 72 air control school graduates demonstrate that the new estimator has greater accuracy than other estimators with values that fall within the parameter space. (SLD)

  18. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  19. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    When data are completely separated or quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method…
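
    One widely used bias-reduction approach for separated data is Firth's (1993) penalized likelihood; whether this matches the report's BR method is not recoverable from the abstract, so the sketch below is only a generic illustration on invented, completely separated data.

```python
import numpy as np
from scipy.special import expit

def firth_logistic(X, y, n_iter=50):
    """Firth's bias-reduced logistic regression: Newton iterations on the
    modified score U*(b) = X'(y - p + h*(0.5 - p)), where h is the
    diagonal of the hat matrix. Yields finite estimates under separation."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = expit(X @ beta)
        W = p * (1 - p)
        cov = np.linalg.inv(X.T @ (W[:, None] * X))
        h = np.einsum("ij,jk,ik->i", X, cov, X) * W   # hat-matrix diagonal
        score = X.T @ (y - p + h * (0.5 - p))
        beta = beta + cov @ score
    return beta

# Completely separated toy data: ordinary MLE diverges, Firth stays finite
X = np.column_stack([np.ones(8), [-3, -2, -1.5, -1, 1, 1.5, 2, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(firth_logistic(X, y))
```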

  20. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  1. The gap between fatherhood and couplehood desires among Israeli gay men and estimations of their likelihood.

    PubMed

    Shenkman, Geva

    2012-10-01

    This study examined the frequencies of the desires and likelihood estimations of Israeli gay men regarding fatherhood and couplehood, using a sample of 183 gay men aged 19-50. It follows previous research which indicated the existence of a gap in the United States with respect to fatherhood, and called for generalizability examinations in other countries and the exploration of possible explanations. As predicted, a gap was also found in Israel between fatherhood desires and their likelihood estimations, as well as between couplehood desires and their likelihood estimations. In addition, lower estimations of fatherhood likelihood were found to predict depression and to correlate with decreased subjective well-being. Possible psychosocial explanations are offered. Moreover, by mapping attitudes toward fatherhood and couplehood among Israeli gay men, the current study helps to extend our knowledge of several central human development motivations and their correlations with depression and subjective well-being in a less-studied sexual minority in a complex cultural climate.

  2. Recent developments in maximum likelihood estimation of MTMM models for categorical data.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods suitable for estimating MTMM models with categorical responses: variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described, and its applicability to MTMM models with categorical data is discussed.

  3. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    ERIC Educational Resources Information Center

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…

  4. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  5. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  6. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  7. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  8. Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method

    NASA Astrophysics Data System (ADS)

    Bayrak, Özlem Türker; Akkaya, Aysen D.

    2010-02-01

    We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of sample observations and are therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous compared with the least squares estimators. We give a real-life example.

  9. Maximum-Likelihood Estimation and Scoring Under Parametric Constraints

    DTIC Science & Technology

    2006-05-01

    A constrained optimum is characterized as a point θ* satisfying the Karush-Kuhn-Tucker (KKT) necessary conditions: stationarity of the Lagrangian, ∇θL(θ*, µ*, ν*) = 0; primal feasibility, g(θ*) ≤ 0; dual feasibility, ν* ≥ 0; and complementary slackness, ν*_i g_i(θ*) = 0, so that either ν*_i or g_i(θ*) is zero whenever the constraint g_i is active. For a convex parameter set Θ, a commonly used alternative optimality condition is ∇θ log p(x; θ*)ᵀ(θ - θ*) ≤ 0 for all θ ∈ Θ, and the Lagrange method is optimal given the KKT conditions.
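
    A minimal sketch of constrained maximum-likelihood estimation in this spirit, using a generic sequential quadratic programming solver (which enforces the KKT conditions) on a toy inequality-constrained normal model; the model and constraint are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Toy constrained MLE: normal mean and variance with the constraint mu <= 1
rng = np.random.default_rng(10)
x = rng.normal(1.5, 2.0, 200)        # true mean violates the constraint

def neg_loglik(theta):
    mu, log_sigma = theta            # log-sigma keeps the scale positive
    return -stats.norm.logpdf(x, mu, np.exp(log_sigma)).sum()

res = minimize(neg_loglik, x0=(0.0, 0.0), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda t: 1.0 - t[0]}])
print(res.x)   # constrained MLE: mu pinned at the boundary mu = 1
```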

  10. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  11. A Joint Maximum Likelihood Estimation Procedure for the Hyperbolic Cosine Model for Single-Stimulus Responses.

    ERIC Educational Resources Information Center

    Luo, Guanzhong

    2000-01-01

    Extends joint maximum likelihood estimation for the hyperbolic cosine model to the situation in which the units of items are allowed to vary. Describes the four estimation cycles designed to address four important issues of model development and presents results from two sets of simulation studies that show reasonably accurate parameter recovery…

  12. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  13. Pseudo Maximum Likelihood Estimation and a Test for Misspecification in Mean and Covariance Structure Models.

    ERIC Educational Resources Information Center

    Arminger, Gerhard; Schoenberg, Ronald J.

    1989-01-01

    Misspecification of mean and covariance structures for metric endogenous variables is considered. Maximum likelihood estimation of model parameters and the asymptotic covariance matrix of the estimates are discussed. A Hausman test for misspecification is developed, which is sensitive to misspecification not detected by the test statistics…

  14. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  15. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  16. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  17. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  18. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    NASA Astrophysics Data System (ADS)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

    Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration-based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration-based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model of a cellular patterning process. We observe a good accuracy of the method as well as a significant speed-up compared to established methods. Integration-based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
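
    To make the repeated-optimization baseline concrete (the approach the integration-based method replaces), here is a minimal profile-likelihood sketch in Python for a toy two-parameter Gaussian model, not the PDE model of the paper; the grid, bounds, and data are all hypothetical.

```python
# "Repeated optimization" profile likelihood for a toy model:
# data y_i ~ N(mu, sigma^2); profile over mu by re-optimizing the
# nuisance parameter sigma at every fixed value of mu.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=0.8, size=50)          # synthetic data

def neg_log_lik(mu, sigma):
    # Gaussian negative log-likelihood up to an additive constant
    return 0.5 * np.sum((y - mu) ** 2) / sigma**2 + y.size * np.log(sigma)

def profile_nll(mu):
    # inner optimization over the nuisance parameter sigma
    res = minimize_scalar(lambda s: neg_log_lik(mu, s),
                          bounds=(1e-3, 10.0), method="bounded")
    return res.fun

mu_grid = np.linspace(1.0, 2.0, 41)
profile = np.array([profile_nll(m) for m in mu_grid])
# Approximate 95% profile-likelihood interval: points within
# chi2_{1,0.95}/2 = 1.92 of the minimum.
inside = profile - profile.min() <= 1.92
print("95%% CI for mu: [%.3f, %.3f]" % (mu_grid[inside].min(), mu_grid[inside].max()))
```

    Every grid point costs a full inner optimization; the integration-based approach replaces this loop with a single solve of a dynamical system that tracks the profile.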

  19. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    PubMed Central

    Valleriani, Angelo

    2016-01-01

    Time series of allele frequencies are a useful and unique set of data with which to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct. PMID:27527811

  20. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    NASA Astrophysics Data System (ADS)

    Valleriani, Angelo

    2016-08-01

    Time series of allele frequencies are a useful and unique set of data with which to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct.

  1. The Bias Function of the Maximum Likelihood Estimate of Ability for the Dichotomous Response Level.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1993-01-01

    F. Samejima's approximation of the bias function of the maximum likelihood estimate of the latent trait in the general case where item responses are discrete is explored. Observations are made about the behavior of this bias function for the dichotomous response level in general. Empirical examples are given. (SLD)

  2. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  3. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  4. The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

    ERIC Educational Resources Information Center

    Enders, Craig K.; Bandalos, Deborah L.

    2001-01-01

    Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1) full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)

  5. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality…

  6. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  7. Determination of lift and drag characteristics of Space Shuttle Orbiter using maximum likelihood estimation technique

    NASA Technical Reports Server (NTRS)

    Trujillo, B. M.

    1986-01-01

    This paper presents the technique and results of maximum likelihood estimation used to determine lift and drag characteristics of the Space Shuttle Orbiter. Maximum likelihood estimation uses measurable parameters to estimate nonmeasurable parameters. The nonmeasurable parameters for this case are elements of a nonlinear, dynamic model of the orbiter. The estimated parameters are used to evaluate a cost function that computes the differences between the measured and estimated longitudinal parameters. The case presented is a dynamic analysis. This places less restriction on pitching motion and can provide additional information about the orbiter such as lift and drag characteristics at conditions other than trim, instrument biases, and pitching moment characteristics. In addition, an output of the analysis is an estimate of the values for the individual components of lift and drag that contribute to the total lift and drag. The results show that maximum likelihood estimation is a useful tool for analysis of Space Shuttle Orbiter performance and is also applicable to parameter analysis of other types of aircraft.

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
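
    The report's score equations are specific to the NDMMF, so as a generic illustration of the Newton-Raphson approach it describes, here is a sketch for a stand-in model (Poisson regression); the NDMMF gradient and Hessian would replace the ones below.

```python
# Newton-Raphson iteration for maximum-likelihood estimation: solve the
# score equations U(beta) = 0 via beta <- beta - H^{-1} U, where U and H
# are the gradient and Hessian of the log-likelihood (Poisson regression
# here as a stand-in model).
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
beta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)              # gradient of the log-likelihood
    hess = -(X * mu[:, None]).T @ X     # Hessian of the log-likelihood
    step = np.linalg.solve(hess, score)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:    # converged
        break
print("MLE:", beta)
```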

  9. Marginal likelihood estimation of negative binomial parameters with applications to RNA-seq data.

    PubMed

    León-Novelo, Luis; Fuentes, Claudio; Emerson, Sarah

    2017-03-19

    RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in any proposed model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the dispersion parameter of the negative binomial distribution, and propose instead to use an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation and produce a more stable and reliable estimator. We then formulate a conjugate Bayesian hierarchical model, and use this new estimator to propose a Bayesian hypothesis test to detect differentially expressed genes in RNA-Seq data. We use numerical studies to show that our much simpler approach is competitive with other negative binomial based procedures, and we use a real data set to illustrate the implementation and flexibility of the procedure.
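
    For orientation, the baseline estimator the authors improve upon (the ordinary MLE of the negative binomial mean and dispersion for a single gene's counts) can be sketched as follows; the data are hypothetical and the parameterization Var = µ + µ²/r is assumed.

```python
# Per-gene MLE of negative binomial (mu, r) for RNA-Seq-style counts.
# This is the plain MLE whose instability under large variances motivates
# the marginal-likelihood alternative proposed in the paper.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

counts = np.array([12, 30, 7, 45, 22, 18, 60, 9])   # hypothetical counts

def nb_nll(params):
    log_mu, log_r = params                # log scale enforces positivity
    mu, r = np.exp(log_mu), np.exp(log_r)
    ll = (gammaln(counts + r) - gammaln(r) - gammaln(counts + 1)
          + r * np.log(r / (r + mu)) + counts * np.log(mu / (r + mu)))
    return -ll.sum()

res = minimize(nb_nll, x0=[np.log(counts.mean()), 0.0], method="Nelder-Mead")
mu_hat, r_hat = np.exp(res.x)
print(f"mu_hat = {mu_hat:.2f}, dispersion r_hat = {r_hat:.2f}")
```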

  10. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173
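
    The paper's closed-form estimator is not reproduced in the abstract, so the sketch below is a numerical stand-in that captures the setup only: two-way message exchanges whose one-way delays are inverse Gaussian with assumed-known mean m and shape λ, with the offset recovered by maximizing the delay likelihood.

```python
# Numerical ML estimate of a clock offset from two-way message exchanges
# with inverse Gaussian one-way delays (parameters assumed known).
import numpy as np
from scipy.stats import invgauss
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
m, lam, offset_true, n = 1.0, 4.0, 0.35, 40     # delay mean/shape, offset, rounds
# scipy convention: IG(mean=m, shape=lam) == invgauss(mu=m/lam, scale=lam)
d_fwd = invgauss.rvs(m / lam, scale=lam, size=n, random_state=rng)
d_rev = invgauss.rvs(m / lam, scale=lam, size=n, random_state=rng)
t1 = np.cumsum(rng.uniform(5, 10, n))           # send times, node A clock
t2 = t1 + d_fwd + offset_true                   # receive times, node B clock
t3 = t2 + 0.1                                   # reply times, node B clock
t4 = t3 + d_rev - offset_true                   # receive times, node A clock

def nll(offset):
    fwd = t2 - t1 - offset                      # implied forward delays
    rev = t4 - t3 + offset                      # implied reverse delays
    if np.any(fwd <= 0) or np.any(rev <= 0):
        return 1e12                             # delays must be positive
    return -(invgauss.logpdf(fwd, m / lam, scale=lam).sum()
             + invgauss.logpdf(rev, m / lam, scale=lam).sum())

res = minimize_scalar(nll, bounds=(-2.0, 2.0), method="bounded")
print("estimated offset:", res.x)
```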

  11. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  12. Abundance estimation from multiple photo surveys: confidence distributions and reduced likelihoods for bowhead whales off Alaska.

    PubMed

    Schweder, Tore

    2003-12-01

    Maximum likelihood estimates of abundance are obtained from repeated photographic surveys of a closed stratified population with naturally marked and unmarked individuals. Capture intensities are assumed log-linear in stratum, year, and season. In the chosen model, an approximate confidence distribution for total abundance of bowhead whales, with an accompanying likelihood reduced of nuisance parameters, is found from a parametric bootstrap experiment. The confidence distribution depends on the assumed study protocol. A confidence distribution that is exact (except for the effect of discreteness) is found by conditioning in the unstratified case without unmarked individuals.

  13. Kernel Smoothed Profile Likelihood Estimation in the Accelerated Failure Time Frailty Model for Clustered Survival Data

    PubMed Central

    Liu, Bo; Lu, Wenbin; Zhang, Jiajia

    2013-01-01

    Clustered survival data frequently arise in biomedical applications, where event times of interest are clustered into groups such as families. In this article we consider an accelerated failure time frailty model for clustered survival data and develop nonparametric maximum likelihood estimation for it via a kernel-smoother-aided EM algorithm. We show that the proposed estimator for the regression coefficients is consistent, asymptotically normal and semiparametric efficient when the kernel bandwidth is properly chosen. An EM-aided numerical differentiation method is derived for estimating its variance. Simulation studies evaluate the finite sample performance of the estimator, and it is applied to the Diabetic Retinopathy data set. PMID:24443587

  14. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  15. A sampling approach to estimate the log determinant used in spatial likelihood problems

    NASA Astrophysics Data System (ADS)

    Pace, R. Kelley; Lesage, James P.

    2009-09-01

    Likelihood-based methods for modeling multivariate Gaussian spatial data have desirable statistical characteristics, but the practicality of these methods for massive georeferenced data sets is often questioned. A sampling algorithm is proposed that exploits a relationship involving log-pivots arising from matrix decompositions used to compute the log-determinant term that appears in the model likelihood. We demonstrate that the method can be used to successfully estimate log-determinants for large numbers of observations. Specifically, we produce a log-determinant estimate for a 3,954,400 by 3,954,400 matrix in less than two minutes on a desktop computer. The proposed method involves computations that are independent, making it amenable to out-of-core computation as well as to coarse-grained parallel or distributed processing. The proposed technique yields an estimated log-determinant and an associated confidence interval.
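
    The identity behind the method, and the flavor of the sampling estimator, can be sketched as follows; the matrix below is a small synthetic spatial-style matrix, and the full Cholesky factor is formed for checking, so the sketch reproduces the estimator's statistics but not the paper's computational savings.

```python
# Log-determinant from log-pivots: for SPD A, log det A = 2*sum(log diag(chol(A))).
# The sampling idea: estimate that sum from a random subset of log-pivots
# and attach a confidence interval.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
W = np.triu(rng.random((n, n)) < 0.003, 1).astype(float)
W = W + W.T                                     # symmetric 0/1 adjacency (synthetic)
rho = 0.5 / W.sum(axis=1).max()                 # keeps eigenvalues of rho*W in (-0.5, 0.5)
A = np.eye(n) - rho * W                         # SPD spatial-autoregression-style matrix

log_pivots = 2.0 * np.log(np.diag(np.linalg.cholesky(A)))
exact = log_pivots.sum()

k = 200                                         # sample 10% of the pivots
idx = rng.choice(n, size=k, replace=False)
est = n * log_pivots[idx].mean()
se = n * log_pivots[idx].std(ddof=1) / np.sqrt(k)
print(f"exact = {exact:.2f}, sampled = {est:.2f} +/- {1.96 * se:.2f}")
```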

  16. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.

    PubMed

    Meyer, Karin

    2016-08-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived by assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.

  17. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
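
    A minimal sketch of the REML-MVN recipe, using a hypothetical two-trait G and a made-up inverse-information covariance (in practice WOMBAT supplies both): draw G matrices from the large-sample normal distribution of the REML estimates and read off the sampling spread of any derived statistic.

```python
# REML-MVN: resample G from N(vech(G_hat), V), where V comes from the
# inverse information matrix, and propagate to a derived statistic
# (here, evolvability along a selection direction beta).
import numpy as np

rng = np.random.default_rng(4)
vech_G = np.array([1.00, 0.30, 0.50])           # (G11, G21, G22), hypothetical
V = np.diag([0.02, 0.01, 0.008])                # made-up inverse-information covariance

def unvech(v):
    # rebuild the symmetric 2x2 matrix from its lower triangle
    return np.array([[v[0], v[1]],
                     [v[1], v[2]]])

beta = np.array([1.0, 0.0])                     # direction of selection
draws = rng.multivariate_normal(vech_G, V, size=10_000)
evolvability = np.array([beta @ unvech(v) @ beta for v in draws])
print("e(beta) = %.3f, REML-MVN SE = %.3f"
      % (beta @ unvech(vech_G) @ beta, evolvability.std(ddof=1)))
```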

  18. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    NASA Astrophysics Data System (ADS)

    Chen, C. E.; Lorenzelli, F.; Hudson, R. E.; Yao, K.

    2007-12-01

    We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction-estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  19. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  20. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  1. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  2. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.

  3. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues are related to the difficulty of accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurately estimating the probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to establish whether the average permeability of a fracture network can be predicted with reduced uncertainty, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
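
    The abstract does not say which distributions were fitted; for the common case of power-law distributed fracture lengths above a cutoff x_min, the standard maximum likelihood estimate of the exponent (the Hill estimator, as popularized by Clauset et al.) is essentially a one-liner.

```python
# ML exponent for a power-law model of fracture lengths above x_min:
# alpha_hat = 1 + n / sum(log(x_i / x_min)), SE ~= (alpha_hat - 1)/sqrt(n).
import numpy as np

rng = np.random.default_rng(5)
x_min, alpha_true = 0.1, 2.5
u = rng.random(500)
lengths = x_min * (1 - u) ** (-1 / (alpha_true - 1))   # synthetic Pareto sample

n = lengths.size
alpha_hat = 1 + n / np.sum(np.log(lengths / x_min))    # MLE of the exponent
se = (alpha_hat - 1) / np.sqrt(n)                      # large-sample standard error
print(f"alpha_hat = {alpha_hat:.3f} +/- {se:.3f}")
```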

  4. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  5. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  6. Estimating natural monthly streamflows in California and the likelihood of anthropogenic modification

    USGS Publications Warehouse

    Carlisle, Daren M.; Wolock, David M.; Howard, Jeannette K.; Grantham, Theodore E.; Fesenmyer, Kurt; Wieczorek, Michael

    2016-12-12

    Because natural patterns of streamflow are a fundamental property of the health of streams, there is a critical need to quantify the degree to which human activities have modified natural streamflows. A requirement for assessing streamflow modification in a given stream is a reliable estimate of flows expected in the absence of human influences. Although there are many techniques to predict streamflows in specific river basins, there is a lack of approaches for making predictions of natural conditions across large regions and over many decades. In this study conducted by the U.S. Geological Survey, in cooperation with The Nature Conservancy and Trout Unlimited, the primary objective was to develop empirical models that predict natural (that is, unaffected by land use or water management) monthly streamflows from 1950 to 2012 for all stream segments in California. Models were developed using measured streamflow data from the existing network of streams where daily flow monitoring occurs, but where the drainage basins have minimal human influences. Widely available data on monthly weather conditions and the physical attributes of river basins were used as predictor variables. Performance of regional-scale models was comparable to that of published mechanistic models for specific river basins, indicating the models can be reliably used to estimate natural monthly flows in most California streams. A second objective was to develop a model that predicts the likelihood that streams experience modified hydrology. New models were developed to predict modified streamflows at 558 streamflow monitoring sites in California where human activities affect the hydrology, using basin-scale geospatial indicators of land use and water management. Performance of these models was less reliable than that for the natural-flow models, but results indicate the models could be used to provide a simple screening tool for identifying, across the State of California, which streams may be…

  7. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    NASA Astrophysics Data System (ADS)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk; Arlunno, Valeria; Caballero, Antonio; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-03-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) and against differential group delay (DGD) in an experiment involving 112 Gbit/s polarization-division multiplexed (PDM) 16-ary quadrature amplitude modulation (16 QAM) and quaternary phase-shift keying (QPSK).

  8. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    PubMed Central

    Cai, Shu; Zhou, Quan; Zhu, Hongbo

    2016-01-01

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion. PMID:27999397

  9. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation.

    PubMed

    Cai, Shu; Zhou, Quan; Zhu, Hongbo

    2016-12-20

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.

  10. Cosmic Microwave Background Likelihood Approximation by a Gaussianized Blackwell-Rao Estimator

    NASA Astrophysics Data System (ADS)

    Rudjord, Ø.; Groeneboom, N. E.; Eriksen, H. K.; Huey, Greg; Górski, K. M.; Jewell, J. B.

    2009-02-01

    We introduce a new cosmic microwave background (CMB) temperature likelihood approximation called the Gaussianized Blackwell-Rao estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. The result is a stable, accurate, and computationally very efficient CMB temperature likelihood representation that allows the user to exploit the unique error propagation capabilities of the Gibbs sampler to high ℓ. A single evaluation of this estimator between ℓ = 2 and 200 takes ~0.2 CPU milliseconds, while for comparison, a single pixel-space likelihood evaluation between ℓ = 2 and 30 for a map with ~2500 pixels requires ~20 s. We apply this tool to the five-year WMAP temperature data, and re-estimate the angular temperature power spectrum, C_ℓ, and likelihood, L(C_ℓ), for ℓ ≤ 200, and derive new cosmological parameters for the standard six-parameter ΛCDM model. Our spectrum is in excellent agreement with the official WMAP spectrum, but we find slight differences in the derived cosmological parameters. Most importantly, the spectral index of scalar perturbations is n_s = 0.973 ± 0.014, 1.9σ away from unity and 0.6σ higher than the official WMAP result, n_s = 0.965 ± 0.014. This suggests that an exact likelihood treatment is required to higher ℓ than previously believed, reinforcing and extending our conclusions from the three-year WMAP analysis. In that case, we found that the suboptimal likelihood approximation adopted between ℓ = 12 and 30 by the WMAP team biased n_s low by 0.4σ, while here we find that the same…
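
    The Gaussianization step can be sketched generically: map each marginal set of Gibbs samples to standard normal deviates through its empirical CDF, then fit a joint Gaussian in the transformed space. The samples below are stand-in gamma draws, not actual C_ℓ chains.

```python
# Gaussianized Blackwell-Rao idea in miniature: per-multipole samples are
# probit-transformed via their empirical CDFs, and the joint distribution of
# the transformed variables is approximated by a multivariate Gaussian.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(6)
# Stand-in "Gibbs samples": skewed marginals for 3 multipoles, 5000 draws
samples = rng.gamma(shape=[4.0, 8.0, 16.0], scale=1.0, size=(5000, 3))

ranks = np.apply_along_axis(rankdata, 0, samples)     # per-column ranks
z = norm.ppf(ranks / (samples.shape[0] + 1))          # probit of empirical CDF
mean_z, cov_z = z.mean(axis=0), np.cov(z, rowvar=False)
# The joint is then approximated as N(mean_z, cov_z); likelihood evaluation
# at new parameter values applies the same marginal transforms first.
print(np.round(cov_z, 3))
```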

  11. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  12. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is described. It is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  13. Maximum likelihood estimation of the mixture of log-concave densities.

    PubMed

    Hu, Hao; Wu, Yichao; Yao, Weixin

    2016-09-01

    Finite mixture models are useful tools and can be estimated via the EM algorithm. A main drawback is the strong parametric assumption about the component densities. In this paper, a much more flexible mixture model is considered, which assumes each component density to be log-concave. Under fairly general conditions, the log-concave maximum likelihood estimator (LCMLE) exists and is consistent. Numerical examples demonstrate that the LCMLE improves the clustering results when compared with the traditional MLE for parametric mixture models.

  14. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
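
    The unregularized Poisson-ML core of such multi-frame restoration is a multi-frame Richardson-Lucy iteration; the sketch below shows only that core, with synthetic frames and known PSFs (the paper adds regularization, frame selection, and PSF estimation on top).

```python
# Multi-frame Richardson-Lucy: the multiplicative Poisson-ML update,
# averaged over frames with per-frame point spread functions (PSFs).
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(7)
truth = np.zeros((64, 64)); truth[20:28, 30:38] = 50.0   # toy object

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

psfs = [gaussian_psf(s) for s in (1.0, 1.5, 2.0)]        # one PSF per frame
frames = [rng.poisson(fftconvolve(truth, h, mode="same").clip(0) + 1e-2)
          for h in psfs]

est = np.full_like(truth, frames[0].mean())              # flat initial estimate
for _ in range(50):
    ratio_sum = np.zeros_like(est)
    for y, h in zip(frames, psfs):
        blurred = fftconvolve(est, h, mode="same") + 1e-12
        ratio_sum += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
    est *= ratio_sum / len(frames)                       # averaged RL update
print("peak of restored image: %.1f" % est.max())
```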

  15. Maximum-likelihood Estimation of Planetary Lithospheric Rigidity from Gravity and Topography

    NASA Astrophysics Data System (ADS)

    Lewis, K. W.; Eggers, G. L.; Simons, F. J.; Olhede, S. C.

    2014-12-01

    Gravity and surface topography remain among the best available tools with which to study the lithospheric structure of planetary bodies. Numerous techniques have been developed to quantify the relationship between these fields in both the spatial and spectral domains, to constrain geophysical parameters of interest. Simons and Olhede (2013) describe a new technique based on maximum-likelihood estimation of lithospheric parameters including flexural rigidity, subsurface-surface loading ratio, and the correlation of these loads. We report on the first applications of this technique to planetary bodies including Venus, Mars, and the Earth. We compare results using the maximum-likelihood technique to previous studies using admittance and coherence-based techniques. While various methods of evaluating the relationship of gravity and topography fields have distinct advantages, we demonstrate the specific benefits of the Simons and Olhede technique, which yields unbiased, minimum variance estimates of parameters, together with their covariance. Given the unavoidable problems of incompletely sensed gravity fields, spectral artifacts of data interpolation, downward continuation, and spatial localization, we prescribe a recipe for application of this method to real-world data sets. In the specific case of Venus, we discuss the results of global mapped inversion of an isotropic Matérn covariance model of its topography. We interpret and identify, via statistical testing, regions that require abandoning the null-hypothesis of isotropic Gaussianity, an assumption of the maximum-likelihood technique.

  16. Additive hazards regression and partial likelihood estimation for ecological monitoring data across space.

    PubMed

    Lin, Feng-Chang; Zhu, Jun

    2012-01-01

    We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.

  17. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  18. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  19. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics.

    PubMed

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-Sung; Maki, Kara L; Ross, David S; Aquavella, James V; Rolland, Jannick P

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. We find, on the assumption that the broadband light source is characterized by circular Gaussian statistics, ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision.

  20. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
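
    A single-shot analogue of transform-domain correlation tracking is FFT phase correlation; the technique described above effectively wraps a weighted version of such a correlation in a closed loop. The sketch below uses synthetic images and recovers a known shift.

```python
# Shift estimation by FFT phase correlation between a reference image and
# a shifted, noisy observation; the correlation peak marks the displacement.
import numpy as np

rng = np.random.default_rng(8)
ref = rng.random((128, 128))
shifted = np.roll(ref, shift=(5, -9), axis=(0, 1)) + 0.05 * rng.normal(size=ref.shape)

F1, F2 = np.fft.fft2(ref), np.fft.fft2(shifted)
cross_power = F1 * np.conj(F2)
cross_power /= np.abs(cross_power) + 1e-12           # keep phase only
corr = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy   # wrap to signed shifts
dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
# With this conjugation order the peak sits at the negative of the applied
# roll, so we expect (dy, dx) == (-5, 9).
print("estimated shift:", (dy, dx))
```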

  1. Simultaneous Multiple Response Regression and Inverse Covariance Matrix Estimation via Penalized Gaussian Maximum Likelihood.

    PubMed

    Lee, Wonyul; Liu, Yufeng

    2012-10-01

    Multivariate regression is a common statistical tool for practical problems. Many multivariate regression techniques are designed for univariate response cases. For problems with multiple response variables available, one common approach is to apply the univariate response regression technique separately on each response variable. Although it is simple and popular, the univariate response approach ignores the joint information among response variables. In this paper, we propose three new methods for utilizing joint information among response variables. All methods are in a penalized likelihood framework with weighted L(1) regularization. The proposed methods provide sparse estimators of the conditional inverse covariance matrix of the response vector given explanatory variables as well as sparse estimators of the regression parameters. Our first approach is to estimate the regression coefficients with plug-in estimated inverse covariance matrices, and our second approach is to estimate the inverse covariance matrix with plug-in estimated regression parameters. Our third approach is to estimate both simultaneously. Asymptotic properties of these methods are explored. Our numerical examples demonstrate that the proposed methods perform competitively in terms of prediction, variable selection, as well as inverse covariance matrix estimation.
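
    For the inverse-covariance piece on its own, an off-the-shelf L1-penalized Gaussian maximum likelihood estimator is available in scikit-learn. This is a related standard tool applied to residual-like data, not the authors' joint regression-plus-covariance method.

```python
# Sparse inverse covariance by L1-penalized Gaussian ML (graphical lasso).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(9)
prec = np.array([[2.0, 0.6, 0.0],                # known sparse precision (synthetic)
                 [0.6, 2.0, 0.6],
                 [0.0, 0.6, 2.0]])
resid = rng.multivariate_normal(np.zeros(3), np.linalg.inv(prec), size=500)

model = GraphicalLasso(alpha=0.05).fit(resid)
print(np.round(model.precision_, 2))             # near-zero where partial correlations vanish
```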

  2. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  3. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses, and its statistical properties are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope.
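
    As a rough illustration of the estimation problem (ignoring any detector response), the sketch below writes the negative log-likelihood of a broken power law that is continuous at the break and fits all three parameters numerically. The sampler, parameter values, and optimizer choice are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def neg_log_like(theta, E, E_min, E_max):
    """Negative log-likelihood of a broken power law proportional to
    E**-a1 below the break E_b and E**-a2 above it, continuous at E_b
    and normalized numerically on [E_min, E_max]."""
    a1, a2, log_Eb = theta
    Eb = np.exp(log_Eb)
    shape = lambda e: e**-a1 if e < Eb else Eb**(a2 - a1) * e**-a2
    Z, _ = quad(shape, E_min, E_max)
    logf = np.where(E < Eb, -a1 * np.log(E),
                    (a2 - a1) * np.log(Eb) - a2 * np.log(E))
    return -(logf.sum() - len(E) * np.log(Z))

# Synthetic sample via rejection sampling from the same density
# (with E_min = 1 and a1 > 1, the acceptance ratio shape(e)*e is <= 1).
rng = np.random.default_rng(2)
E_min, E_max, a1, a2, Eb = 1.0, 1e3, 1.7, 2.7, 50.0
events = []
while len(events) < 2000:
    e = np.exp(rng.uniform(np.log(E_min), np.log(E_max)))
    dens = e**-a1 if e < Eb else Eb**(a2 - a1) * e**-a2
    if rng.uniform() < dens * e:
        events.append(e)
E = np.array(events)

res = minimize(neg_log_like, x0=[1.5, 3.0, np.log(30.0)],
               args=(E, E_min, E_max), method="Nelder-Mead")
print(res.x[0], res.x[1], np.exp(res.x[2]))  # near 1.7, 2.7, 50
```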

  4. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  5. Full Information Maximum Likelihood Estimation for Latent Variable Interactions With Incomplete Indicators.

    PubMed

    Cham, Heining; Reshetnyak, Evgeniya; Rosenfeld, Barry; Breitbart, William

    2017-01-01

    Researchers have developed missing data handling techniques for estimating interaction effects in multiple regression. Extending to latent variable interactions, we investigated full information maximum likelihood (FIML) estimation to handle incompletely observed indicators for product indicator (PI) and latent moderated structural equations (LMS) methods. Drawing on the analytic work on missing data handling techniques in multiple regression with interaction effects, we compared the performance of FIML for PI and LMS analytically. We performed a simulation study to compare FIML for PI and LMS. We recommend using FIML for LMS when the indicators are missing completely at random (MCAR) or missing at random (MAR) and when they are normally distributed. FIML for LMS produces unbiased parameter estimates with small variances, correct Type I error rates, and high statistical power of interaction effects. We illustrated the use of these methods by analyzing the interaction effect between advanced cancer patients' depression and change of inner peace well-being on future hopelessness levels.

  6. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  7. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  8. Benefits of maximum likelihood estimators for fracture attribute analysis: Implications for permeability and up-scaling

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2017-02-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimate bulk permeability and therefore fluid flow, especially for rocks with low primary porosity where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. By applying Maximum Likelihood Estimators to outcrop fracture data, we generate fracture network models with the same statistical attributes as those observed at outcrop, from which we can achieve more robust predictions of bulk permeability.
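
    The core step, fitting a log-normal to fracture attributes by maximum likelihood, is standard enough to sketch. The snippet below uses synthetic lengths as a stand-in for outcrop data and shows both the closed-form MLE and the equivalent scipy fit, plus an AIC comparison against a power-law alternative in the spirit of selecting the best-supported attribute distribution; all numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lengths = rng.lognormal(mean=0.5, sigma=0.8, size=400)  # synthetic stand-in

# Closed-form lognormal MLE: mean and (MLE) std of the log data.
mu_hat = np.log(lengths).mean()
sigma_hat = np.log(lengths).std()

# Equivalent fit via scipy, with the location pinned at zero.
shape, loc, scale = stats.lognorm.fit(lengths, floc=0)
print(mu_hat, np.log(scale))   # both estimate mu
print(sigma_hat, shape)        # both estimate sigma

# AIC comparison against a power-law (Pareto) alternative; two free
# parameters in each model, lower AIC indicates better support.
ll_logn = stats.lognorm.logpdf(lengths, shape, loc, scale).sum()
b, loc_p, scale_p = stats.pareto.fit(lengths, floc=0)
ll_par = stats.pareto.logpdf(lengths, b, loc_p, scale_p).sum()
print(2 * 2 - 2 * ll_logn, 2 * 2 - 2 * ll_par)
```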

  9. Carrier Recovery Enhancement for Maximum-Likelihood Doppler Shift Estimation in Mars Exploration Missions

    NASA Astrophysics Data System (ADS)

    Cattivelli, Federico S.; Estabrook, Polly; Satorius, Edgar H.; Sayed, Ali H.

    2008-11-01

    One of the most crucial stages of Mars exploration missions is the entry, descent, and landing (EDL) phase. During EDL, maintaining reliable communication from the spacecraft to Earth is extremely important for the success of future missions, especially in the case of mission failure. EDL is characterized by very strong accelerations, caused by atmospheric friction, parachute deployment, and rocket firing, among others. These dynamics cause a severe Doppler shift on the carrier communications link to Earth. Methods have been proposed to estimate the Doppler shift based on maximum likelihood. So far these methods have proved successful, but it is expected that the next Mars mission, known as the Mars Science Laboratory, will suffer from higher dynamics and lower SNR. Thus, improving the existing estimation methods becomes a necessity. We propose a maximum likelihood approach that takes into account the power in the data tones to enhance carrier recovery, and improves the estimation performance by up to 3 dB. Simulations are performed using real data obtained during the EDL stage of the Mars Exploration Rover B (MERB) mission.
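
    The data-tone-aided estimator is not described in enough detail here to reproduce, but the baseline it builds on can be: for a single complex tone in white Gaussian noise, the ML frequency (Doppler) estimate is the location of the periodogram peak. A minimal sketch follows; the sample rate, Doppler value, and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1e4, 4096                      # sample rate (Hz), record length
f_dopp = 1234.5                        # true Doppler shift to recover
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_dopp * t)
x += 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# For a single tone in white Gaussian noise, the ML frequency estimate
# is the periodogram maximum; zero-padding refines the grid search.
pad = 8 * n
spec = np.abs(np.fft.fft(x, pad)) ** 2
freqs = np.fft.fftfreq(pad, 1 / fs)
print(freqs[np.argmax(spec)])          # close to 1234.5 Hz
```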

  10. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
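
    A full 20-Gbit/s QPSK receiver is out of scope, but the MLSE idea, a Viterbi search for the symbol sequence minimizing squared error against a known channel, can be shown on a toy real-valued 2-tap channel with BPSK symbols. The taps, noise level, and sequence length below are assumptions.

```python
import numpy as np

def viterbi_mlse(y, h, symbols=(-1.0, 1.0)):
    """ML sequence estimation via the Viterbi algorithm for a known
    2-tap channel y[k] = h0*s[k] + h1*s[k-1] + noise; the trellis state
    is the previous symbol."""
    cost = {s: 0.0 for s in symbols}   # best accumulated squared error
    paths = {s: [] for s in symbols}
    for k in range(len(y)):
        new_cost, new_paths = {}, {}
        for s in symbols:              # candidate current symbol
            best = None
            for p in symbols:          # candidate previous symbol
                c = cost[p] + (y[k] - (h[0] * s + h[1] * p)) ** 2
                if best is None or c < best[0]:
                    best = (c, p)
            new_cost[s] = best[0]
            new_paths[s] = paths[best[1]] + [s]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

rng = np.random.default_rng(5)
h = (1.0, 0.6)                         # known channel taps (assumed)
s = rng.choice([-1.0, 1.0], size=200)
y = h[0] * s + h[1] * np.concatenate(([0.0], s[:-1]))
y = y + 0.3 * rng.normal(size=len(s))
s_hat = np.array(viterbi_mlse(y, h))
print((s_hat == s).mean())             # symbol accuracy, typically ~1.0
```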

  11. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  12. Phase Noise Investigation of Maximum Likelihood Estimation Method for Airborne Multibaseline SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Magnard, C.; Small, D.; Meier, E.

    2015-03-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.

  13. Maximum-likelihood estimation optimizer for constrained, time-optimal satellite reorientation

    NASA Astrophysics Data System (ADS)

    Melton, Robert G.

    2014-10-01

    The Covariance Matrix Adaptation-Evolutionary Strategy (CMA-ES) method provides a high-quality estimate of the control solution for an unconstrained satellite reorientation problem, and rapid, useful guesses needed for high-fidelity methods that can solve time-optimal reorientation problems with multiple path constraints. The CMA-ES algorithm offers two significant advantages over heuristic methods such as Particle Swarm or Bacteria Foraging Optimisation: it builds an approximation to the covariance matrix for the cost function, and uses that to determine a direction of maximum likelihood for the search, reducing the chance of stagnation; and it achieves second-order, quasi-Newton convergence behaviour.

  14. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  15. Maximum Likelihood Event Estimation and List-mode Image Reconstruction on GPU Hardware

    PubMed Central

    Caucci, Luca; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    The scintillation detectors commonly used in SPECT and PET imaging and in Compton cameras require estimation of the position and energy of each gamma ray interaction. Ideally, this process would yield images with no spatial distortion and the best possible spatial resolution. In addition, especially for Compton cameras, the computation must yield the best possible estimate of the energy of each interacting gamma ray. These goals can be achieved by use of maximum-likelihood (ML) estimation of the event parameters, but in the past the search for an ML estimate has not been computationally feasible. Now, however, graphics processing units (GPUs) make it possible to produce optimal, real-time estimates of position and energy, even from scintillation cameras with a large number of photodetectors. In addition, the mathematical properties of ML estimates make them very attractive for use as list entries in list-mode ML image reconstruction. This two-step ML process—using ML estimation once to get the list data and again to reconstruct the object—allows accurate modeling of the detector blur and, potentially, considerable improvement in reconstructed spatial resolution. PMID:21278803

  16. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  17. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated

    NASA Astrophysics Data System (ADS)

    Langlois, Dominic; Cousineau, Denis; Thivierge, J. P.

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.
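
    The paper's point, that ignoring truncation biases power-law fits, is easy to demonstrate in the continuous case. The sketch below samples from a power law truncated to [x_min, x_max], then compares the naive MLE that assumes an infinite tail with an MLE whose likelihood accounts for the truncation; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_pl_nll(alpha, x, x_min, x_max):
    """Negative log-likelihood of a continuous power law ~ x**-alpha
    truncated to [x_min, x_max]."""
    norm = (x_max**(1 - alpha) - x_min**(1 - alpha)) / (1 - alpha)
    return alpha * np.log(x).sum() + len(x) * np.log(norm)

# Sample by inverting the truncated power-law CDF.
rng = np.random.default_rng(6)
alpha, x_min, x_max, n = 2.5, 1.0, 50.0, 5000
u = rng.uniform(size=n)
a1 = 1 - alpha
x = (x_min**a1 + u * (x_max**a1 - x_min**a1)) ** (1 / a1)

# Naive estimator that ignores the upper truncation (Hill-type MLE).
alpha_naive = 1 + n / np.log(x / x_min).sum()
# MLE that accounts for the truncation.
res = minimize_scalar(truncated_pl_nll, bounds=(1.01, 5.0),
                      args=(x, x_min, x_max), method="bounded")
print(alpha_naive, res.x)   # the naive estimate is biased upward
```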

  18. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated.

    PubMed

    Langlois, Dominic; Cousineau, Denis; Thivierge, J P

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.

  19. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  20. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
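
    A minimal version of the estimation problem can be sketched: photons arrive as an inhomogeneous Poisson process whose rate is a background plus a shifted pulse, and the ML arrival time maximizes the Poisson log-likelihood over the shift. The Gaussian pulse shape, rates, and grid search below are assumptions, not the paper's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(7)
T, tau_true, width = 100.0, 42.0, 2.0     # window (ns), true arrival, pulse width
bg, amp = 0.05, 4.0                        # background and peak rates (photons/ns)

rate = lambda t, tau: bg + amp * np.exp(-0.5 * ((t - tau) / width) ** 2)

# Simulate photon timestamps by thinning a homogeneous Poisson process.
lam_max = bg + amp
cand = rng.uniform(0, T, rng.poisson(lam_max * T))
t_ph = cand[rng.uniform(size=len(cand)) < rate(cand, tau_true) / lam_max]

# The Poisson log-likelihood over a shift tau is sum(log rate) minus the
# integrated rate; the integral is (nearly) constant while the pulse stays
# inside the window, so a grid search over the first term suffices.
taus = np.linspace(10, 90, 8001)
loglik = [np.log(rate(t_ph, tau)).sum() for tau in taus]
print(taus[np.argmax(loglik)])             # close to 42.0
```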

  1. Gutenberg-Richter b-value maximum likelihood estimation and sample size

    NASA Astrophysics Data System (ADS)

    Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.

    2017-01-01

    The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results quantify the probability of obtaining a correct estimate of b, at a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
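
    The Aki-Utsu estimator and the paper's Monte Carlo question, how estimation scatter depends on sample size, can both be sketched in a few lines. The b-value, completeness magnitude, and rounding interval below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)
b_true, M_c, dM = 1.0, 2.0, 0.1            # b-value, completeness, rounding

def sample_magnitudes(n):
    """Exponential (Gutenberg-Richter) magnitudes above M_c, rounded to dM."""
    beta = b_true * np.log(10)
    m = M_c + rng.exponential(1 / beta, size=n)
    return np.round(m / dM) * dM

def b_aki_utsu(m):
    """Aki-Utsu MLE with Utsu's half-bin correction for rounded magnitudes."""
    return np.log10(np.e) / (m.mean() - (M_c - dM / 2))

for n in (25, 100, 400, 1600):
    b_hats = np.array([b_aki_utsu(sample_magnitudes(n)) for _ in range(2000)])
    print(n, b_hats.mean().round(3), b_hats.std().round(3))
# The scatter shrinks roughly as 1/sqrt(n); small samples give unreliable b.
```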

  2. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  3. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, sigma_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index sigma_2 greater than sigma_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.

  4. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  5. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  6. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  7. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.

  8. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, a maximum likelihood estimation technique is employed to refine the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  9. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  10. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  11. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    PubMed Central

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.] PMID:27499547
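
    The uniform-family mechanics behind examples of this kind are easy to sketch (this is the textbook U(0, theta) case, not necessarily the paper's exact construction): a crude unbiased estimator, its Rao-Blackwellization onto the sample maximum, and the biased MLE, compared by simulation.

```python
import numpy as np

rng = np.random.default_rng(9)
theta, n, reps = 1.0, 10, 100000
x = rng.uniform(0, theta, size=(reps, n))

crude = 2 * x[:, 0]                  # unbiased but very noisy: 2*X1
mle = x.max(axis=1)                  # MLE: the sample maximum (biased low)
rb = (n + 1) / n * x.max(axis=1)     # E[2*X1 | max] = (n+1)/n * max: unbiased

for name, est in [("crude", crude), ("MLE", mle), ("RB", rb)]:
    print(name, est.mean().round(4), ((est - theta) ** 2).mean().round(5))
```

    The Rao-Blackwellized estimator keeps the crude estimator's unbiasedness while cutting its mean squared error dramatically, and it also beats the MLE here.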

  12. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].

  13. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and CLEAN technique.

    PubMed

    Ma, Changzheng; Yeo, Tat Soon; Tan, Chee Seng; Tan, Hwee Siang

    2010-08-01

    Large 2-D sparse array provides high angular resolution microwave images but artifacts are also induced by the high sidelobes of the beam pattern, thus, limiting its dynamic range. CLEAN technique has been used in the literature to extract strong scatterers for use in subsequent signal cancelation (artifacts removal). However, the performance of DFT parameters estimation based CLEAN algorithm for the estimation of the signal amplitudes is known to be poor, and this affects the signal cancelation. In this paper, DFT is used only to provide the initial estimates, and the maximum likelihood parameters estimation method with steepest descent implementation is then used to improve the precision of the calculated scatterers positions and amplitudes. Time domain information is also used to reduce the sidelobe levels. As a result, clear, artifact-free images could be obtained. The effects of multiple reflections and rotation speed estimation error are also discussed. The proposed method has been verified using numerical simulations and it has been shown to be effective.

  14. Binomial distribution sample confidence intervals estimation for positive and negative likelihood ratio medical key parameters.

    PubMed

    Bolboacă, Sorana; Jäntschi, Lorentz

    2005-01-01

    Likelihood ratio medical key parameters calculated on categorical results from diagnostic tests are usually expressed together with their confidence intervals, computed using the normal distribution approximation to the binomial distribution. The approximation creates known anomalies, especially for limit cases. In order to improve the quality of estimation, four new methods (called here RPAC, RPAC0, RPAC1, and RPAC2) were developed and compared with the classical method (called here RPWald), using an exact probability calculation algorithm. Computer implementations of the methods use the PHP language. We defined and implemented the functions of the four new methods and the five criteria of confidence interval assessment. The experiments were run for sample sizes varying in the ranges 14-34 and 90-100 (0 < X < m, 0 < Y < n), as well as for random sample sizes and likelihood ratios.
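
    For orientation, the classical Wald-type interval that serves as the baseline here (the RPWald-style method) can be sketched: LR+ is estimated from a 2x2 table and its confidence interval is formed on the log scale. The counts below are invented, and the paper's improved RPAC-family methods are not reproduced.

```python
import numpy as np
from scipy import stats

def lr_plus_ci(tp, fn, fp, tn, level=0.95):
    """Wald-type confidence interval for the positive likelihood ratio
    LR+ = sensitivity / (1 - specificity), built on the log scale with
    the classical normal approximation."""
    sens = tp / (tp + fn)
    one_minus_spec = fp / (fp + tn)
    lr = sens / one_minus_spec
    se_log = np.sqrt(1 / tp - 1 / (tp + fn) + 1 / fp - 1 / (fp + tn))
    z = stats.norm.ppf(0.5 + level / 2)
    return lr, lr * np.exp(-z * se_log), lr * np.exp(z * se_log)

print(lr_plus_ci(tp=28, fn=6, fp=12, tn=88))  # LR+ about 6.9 with 95% CI
```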

  15. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under the noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the Patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, better converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the Patient group the shape factor of the measurement noise converges to values between 1 and 2 whereas for the Control group shape factor values are estimated in the super-Gaussian area.

  16. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index alpha_2 greater than alpha_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from

  17. Step change point estimation in the multivariate-attribute process variability using artificial neural networks and maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad Reza; Amiri, Amirhossein; Mousavi, Seyed Meysam

    2015-07-01

    In some statistical process control applications, the quality of the product or process is represented by a combination of correlated variable and attribute quality characteristics. In such processes, identifying the time at which an out-of-control state manifests can help quality engineers eliminate the assignable causes through proper corrective actions. In this paper, we first use an artificial neural network (ANN)-based method from the literature for detecting variance shifts as well as diagnosing the sources of variation in multivariate-attribute processes. Then, based on the quality characteristics responsible for the out-of-control state, we propose a modular ANN-based model for estimating the time of a step change in multivariate-attribute process variability. We also compare the performance of the ANN-based estimator with an estimator based on the maximum likelihood method (MLE). A numerical example based on a simulation study is used to evaluate the performance of the estimators in terms of accuracy and precision criteria. The results of the simulation study show that the proposed ANN-based estimator outperforms the MLE estimator under different out-of-control scenarios, where shifts of different magnitudes in the covariance matrix of the multivariate-attribute quality characteristics are manifested.
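
    The MLE benchmark has a well-known univariate analogue that is easy to sketch: for a normal mean with known in-control parameters, the step-change-point MLE maximizes (T - t) times the squared deviation of the post-t sample mean. The sketch below uses that univariate version with assumed values; the paper's multivariate-attribute setting and ANN estimator are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)
mu0, sigma, T, tau_true = 0.0, 1.0, 120, 80
x = rng.normal(mu0, sigma, size=T)
x[tau_true:] += 1.5 * sigma              # step shift after the change point

# MLE of a step change point for a normal mean with known in-control
# parameters: maximize (T - t) * (mean of x[t:] - mu0)**2 over t.
t_grid = np.arange(T - 1)
stat = [(T - t) * (x[t:].mean() - mu0) ** 2 for t in t_grid]
tau_hat = t_grid[np.argmax(stat)]
print(tau_hat)                           # close to 80
```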

  18. Nonparametric Maximum Penalized Likelihood Estimation of a Density from Arbitrarily Right-Censored Observations.

    DTIC Science & Technology

    1984-10-01

    Based on an arbitrarily right-censored sample (x_i, d_i), i = 1, 2, ..., n, the phi-penalized likelihood of v in H(n) is defined, and the first and second Fréchet derivatives of the penalized log-likelihood are given (Tapia, 1971). Keywords: maximum penalized likelihood estimate (MPLE); survival estimation; random censorship; nonparametric density estimation; reliability.

  19. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  20. Confidence Interval Estimation for Sensitivity to the Early Diseased Stage Based on Empirical Likelihood.

    PubMed

    Dong, Tuochuan; Tian, Lili

    2015-01-01

    Many disease processes can be divided into three stages: the non-diseased stage, the early diseased stage, and the fully diseased stage. To assess the accuracy of diagnostic tests for such diseases, various summary indexes have been proposed, such as the volume under the surface (VUS), the partial volume under the surface (PVUS), and the sensitivity to the early diseased stage given the specificity and the sensitivity to the fully diseased stage (P2). This paper focuses on confidence interval estimation for P2 based on empirical likelihood. Simulation studies are carried out to assess the performance of the new methods compared to the existing parametric and nonparametric ones. A real dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) is analyzed.

  1. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

    In human genetic maps, differences between female (x_f) and male (x_m) map distances may be characterized by the ratio, R = x_f/x_m, or the relative difference, Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1). For a map of genetic markers spread along a chromosome, Q(d) may be viewed as a graph of Q versus the midpoints, d, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of Q(d) along the chromosome, where Q(d) is expressed as a polynomial in d. Stepwise maximum likelihood polynomial regression of Q is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.

  2. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    PubMed

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.

  3. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed

  4. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package.

    PubMed

    Decker, Anna L; Hubbard, Alan; Crespi, Catherine M; Seto, Edmund Y W; Wang, May C

    2014-03-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated

  5. A likelihood framework for joint estimation of salmon abundance and migratory timing using telemetric mark-recapture

    USGS Publications Warehouse

    Bromaghin, Jeffrey; Gates, Kenneth S.; Palmer, Douglas E.

    2010-01-01

    Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding
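
    The stratified telemetry likelihood itself is beyond a short sketch, but the underlying mark-recapture likelihood is simple: each fish inspected upstream is marked with probability n_marked/N, so abundance can be estimated by maximizing a binomial likelihood. The counts below are invented, and this is the pooled two-sample version, not the paper's stratified joint model.

```python
from scipy import stats
from scipy.optimize import minimize_scalar

n_marked, n_sampled, m_recaptured = 500, 800, 40   # assumed counts

def nll(N):
    """Negative binomial log-likelihood for abundance N: each of the
    n_sampled fish is marked with probability n_marked / N."""
    return -stats.binom.logpmf(m_recaptured, n_sampled, n_marked / N)

res = minimize_scalar(nll, bounds=(n_marked + n_sampled, 1e6),
                      method="bounded")
print(res.x)   # near the Lincoln-Petersen value n_marked*n_sampled/m = 10000
```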

  6. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.

  7. Estimating the likelihood of significant climate change with the NCAR 40-member ensemble

    NASA Astrophysics Data System (ADS)

    Foust, William Eliott

    Increasing greenhouse gas concentrations are changing the radiative forcing on the climate system, and this forcing will be the key driver of climate change over the 21st century. One of the most pressing questions associated with climate change is whether certain aspects of the climate system will change significantly. Climate ensembles are often used to estimate the probability of significant climate change, but they struggle to produce accurate estimates of significant climate change because they sometimes require more realizations than is feasible to produce. Additionally, the ensemble mean suggests how the climate will respond to an external forcing, but since it filters out the variability, it cannot determine whether the response is significant. In this study, the NCAR CCSM 40-member ensemble and a lag-1 autoregressive model (AR1 model) are used to estimate the likelihood that climate trends will be significant. The AR1 model generates an analytic solution for what the distribution of trends should be if the NCAR model were run an infinite number of times. The analytical solution produced by the AR1 model is used to assess the significance of future climate trends. The results of this study demonstrate that an AR1 model can aid in making a probabilistic forecast. Additionally, the results give insight into the certainty of the trends in the surface temperature field, precipitation field, and atmospheric circulation, the probability of climate trends being significant, and whether the significance of climate trends is dependent on the internal variability or anthropogenic forcing.
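
    A rough sketch of the core idea under stated assumptions (the AR(1) parameters, trend size, and surrogate ensemble size below are invented, not taken from the study): treat a lag-1 autoregressive process as a cheap surrogate for an arbitrarily large ensemble, build the null distribution of trends from internal variability alone, and ask how often a forced trend would be judged significant.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, phi, sigma = 50, 0.6, 1.0   # illustrative AR(1) parameters
forced_trend = 0.02                  # units per year, an assumed forced signal

def ar1_series(n, phi, sigma):
    # Stationary AR(1): x_t = phi*x_{t-1} + e_t
    x = np.empty(n)
    x[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

t = np.arange(n_years)
n_real = 10_000                      # surrogate stand-in for a huge ensemble
slopes = np.array([np.polyfit(t, forced_trend * t + ar1_series(n_years, phi, sigma), 1)[0]
                   for _ in range(n_real)])

# Null distribution of trends from internal variability alone.
null = np.array([np.polyfit(t, ar1_series(n_years, phi, sigma), 1)[0]
                 for _ in range(n_real)])
crit = np.quantile(np.abs(null), 0.95)
print("P(trend judged significant) ~ %.2f" % np.mean(np.abs(slopes) > crit))
```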

  8. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.

    PubMed

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A

    2009-06-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0), where φ_0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number k of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

  9. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
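
    At step size 1, this kind of successive-approximation procedure reduces to the familiar fixed-point (EM-type) iteration for normal mixtures. Below is a minimal sketch for two components with "partially identified" data, i.e. some observations carry known component labels; all sample sizes and starting values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Partially identified data: some observations carry known component labels.
x_lab0 = rng.normal(0.0, 1.0, 50)       # identified as component 0
x_lab1 = rng.normal(3.0, 1.5, 50)       # identified as component 1
x_unl = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.5, 200)])
n_tot = len(x_unl) + len(x_lab0) + len(x_lab1)

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

pi = np.array([0.5, 0.5]); mu = np.array([-1.0, 4.0]); sd = np.array([1.0, 1.0])
for _ in range(200):                     # fixed-point iteration (step size 1)
    dens = np.stack([pi[k] * norm_pdf(x_unl, mu[k], sd[k]) for k in (0, 1)])
    r = dens / dens.sum(axis=0)          # responsibilities for unlabeled points
    for k, x_lab in ((0, x_lab0), (1, x_lab1)):
        w = r[k].sum() + len(x_lab)      # labeled points enter with weight 1
        mu[k] = (r[k] @ x_unl + x_lab.sum()) / w
        sd[k] = np.sqrt((r[k] @ (x_unl - mu[k]) ** 2
                         + ((x_lab - mu[k]) ** 2).sum()) / w)
        pi[k] = w / n_tot
print("mu:", mu.round(2), "sd:", sd.round(2), "pi:", pi.round(2))
```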

  10. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)☆

    PubMed Central

    Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther

    2013-01-01

    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative, allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes are also demonstrated. PMID:23948335

  11. Maximum likelihood estimation of proton irradiated field and deposited dose distribution

    SciTech Connect

    Inaniwa, Taku; Kohno, Toshiyuki; Yamagata, Fumiko; Tomitani, Takehiro; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki; Urakabe, Eriko

    2007-05-15

    In proton therapy, it is important to evaluate the field irradiated with protons and the deposited dose distribution in a patient's body. Positron emitters generated through fragmentation reactions of target nuclei can be used for this purpose. By detecting the annihilation gamma rays from the positron emitters, the annihilation gamma ray distribution can be obtained, which has information about the quantities essential to proton therapy. In this study, we performed irradiation experiments with mono-energetic proton beams of 160 MeV and spread-out Bragg peak (SOBP) beams on three kinds of targets. The annihilation events were detected with a positron camera for 500 s after the irradiation and the annihilation gamma ray distributions were obtained. In order to evaluate the range and the position of distal and proximal edges of the SOBP, the maximum likelihood estimation (MLE) method was applied to the detected distributions. The evaluated values with the MLE method were compared with those estimated from the measured dose distributions. As a result, the ranges were determined with differences between the MLE and experimental ranges of less than 1.0 mm for all targets. For the SOBP beams, the positions of distal edges were determined with differences of less than 1.0 mm. On the other hand, the difference amounted to 7.9 mm for proximal edges.

  12. Maximum likelihood estimation of proton irradiated field and deposited dose distribution.

    PubMed

    Inaniwa, Taku; Kohno, Toshiyuki; Yamagata, Fumiko; Tomitani, Takehiro; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki; Urakabe, Eriko

    2007-05-01

    In proton therapy, it is important to evaluate the field irradiated with protons and the deposited dose distribution in a patient's body. Positron emitters generated through fragmentation reactions of target nuclei can be used for this purpose. By detecting the annihilation gamma rays from the positron emitters, the annihilation gamma ray distribution can be obtained, which has information about the quantities essential to proton therapy. In this study, we performed irradiation experiments with mono-energetic proton beams of 160 MeV and spread-out Bragg peak (SOBP) beams on three kinds of targets. The annihilation events were detected with a positron camera for 500 s after the irradiation and the annihilation gamma ray distributions were obtained. In order to evaluate the range and the position of distal and proximal edges of the SOBP, the maximum likelihood estimation (MLE) method was applied to the detected distributions. The evaluated values with the MLE method were compared with those estimated from the measured dose distributions. As a result, the ranges were determined with differences between the MLE and experimental ranges of less than 1.0 mm for all targets. For the SOBP beams, the positions of distal edges were determined with differences of less than 1.0 mm. On the other hand, the difference amounted to 7.9 mm for proximal edges.

  13. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.

    2002-01-01

    The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum variance, and normally distributed spectral information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure supported important trade studies of design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope. This ML methodology is then generalized to estimate broken power law spectral parameters from real cosmic-ray data sets.
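
    A stripped-down, hedged version of the core computation, ignoring the detector response functions that the paper models in detail: unbinned maximum likelihood for a two-segment power law that is continuous at the break, with data drawn by inverse-CDF sampling. The energy window and parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize

E_MIN, E_MAX = 1.0, 1e4                # assumed energy window (arbitrary units)

def seg_integral(a, b, g):
    # Integral of E**(-g) from a to b (assumes g != 1 for simplicity).
    return (b**(1 - g) - a**(1 - g)) / (1 - g)

def neg_log_lik(params, E):
    g1, g2, log_Eb = params
    Eb = np.exp(log_Eb)
    if not (E_MIN < Eb < E_MAX):
        return np.inf
    c = Eb**(g2 - g1)                  # continuity factor for the upper segment
    norm = seg_integral(E_MIN, Eb, g1) + c * seg_integral(Eb, E_MAX, g2)
    logf = np.where(E < Eb, -g1 * np.log(E), np.log(c) - g2 * np.log(E))
    return -(logf.sum() - len(E) * np.log(norm))

def sample(n, g1, g2, Eb):
    # Inverse-CDF sampling, segment by segment.
    c = Eb**(g2 - g1)
    w1 = seg_integral(E_MIN, Eb, g1)
    w2 = c * seg_integral(Eb, E_MAX, g2)
    rng = np.random.default_rng(3)
    seg = rng.uniform(size=n) < w1 / (w1 + w2)
    u = rng.uniform(size=n)
    lo, hi = np.where(seg, E_MIN, Eb), np.where(seg, Eb, E_MAX)
    g = np.where(seg, g1, g2)
    return (lo**(1 - g) + u * (hi**(1 - g) - lo**(1 - g)))**(1 / (1 - g))

E = sample(5000, 2.7, 3.1, 100.0)
fit = minimize(neg_log_lik, x0=[2.0, 2.5, np.log(50.0)], args=(E,),
               method="Nelder-Mead")
print("g1=%.2f g2=%.2f Eb=%.0f" % (fit.x[0], fit.x[1], np.exp(fit.x[2])))
```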

  14. Maximum likelihood estimation of vehicle position for outdoor image sensor-based visible light positioning system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Lin, Jiming

    2016-04-01

    Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the viewpoint of statistical optimization for an outdoor image sensor-based visible light positioning system, we derive the maximum likelihood estimate and the corresponding Cramér-Rao lower bound of the vehicle position, under the condition that the observed positions of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance held constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.

  15. Qualitative release assessment to estimate the likelihood of henipavirus entering the United Kingdom.

    PubMed

    Snary, Emma L; Ramnial, Vick; Breed, Andrew C; Stephenson, Ben; Field, Hume E; Fooks, Anthony R

    2012-01-01

    The genus Henipavirus includes Hendra virus (HeV) and Nipah virus (NiV), for which fruit bats (particularly those of the genus Pteropus) are considered to be the wildlife reservoir. The recognition of henipaviruses occurring across a wider geographic and host range suggests the possibility of the virus entering the United Kingdom (UK). To estimate the likelihood of henipaviruses entering the UK, a qualitative release assessment was undertaken. To facilitate the release assessment, the world was divided into four zones according to location of outbreaks of henipaviruses, isolation of henipaviruses, proximity to other countries where incidents of henipaviruses have occurred and the distribution of Pteropus spp. fruit bats. From this release assessment, the key findings are that the importation of fruit from Zones 1 and 2 and bat bushmeat from Zone 1 each have a Low annual probability of release of henipaviruses into the UK. Similarly, the importation of bat meat from Zone 2, horses and companion animals from Zone 1, and people travelling from Zone 1 and entering the UK were each estimated to pose a Very Low probability of release. The annual probability of release for all other release routes was assessed to be Negligible. It is recommended that the release assessment be periodically re-assessed to reflect changes in knowledge and circumstances over time.

  16. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression, and interesting new underlying dynamics are observed in both cases.

  17. Qualitative Release Assessment to Estimate the Likelihood of Henipavirus Entering the United Kingdom

    PubMed Central

    Snary, Emma L.; Ramnial, Vick; Breed, Andrew C.; Stephenson, Ben; Field, Hume E.; Fooks, Anthony R.

    2012-01-01

    The genus Henipavirus includes Hendra virus (HeV) and Nipah virus (NiV), for which fruit bats (particularly those of the genus Pteropus) are considered to be the wildlife reservoir. The recognition of henipaviruses occurring across a wider geographic and host range suggests the possibility of the virus entering the United Kingdom (UK). To estimate the likelihood of henipaviruses entering the UK, a qualitative release assessment was undertaken. To facilitate the release assessment, the world was divided into four zones according to location of outbreaks of henipaviruses, isolation of henipaviruses, proximity to other countries where incidents of henipaviruses have occurred and the distribution of Pteropus spp. fruit bats. From this release assessment, the key findings are that the importation of fruit from Zones 1 and 2 and bat bushmeat from Zone 1 each have a Low annual probability of release of henipaviruses into the UK. Similarly, the importation of bat meat from Zone 2, horses and companion animals from Zone 1, and people travelling from Zone 1 and entering the UK were each estimated to pose a Very Low probability of release. The annual probability of release for all other release routes was assessed to be Negligible. It is recommended that the release assessment be periodically re-assessed to reflect changes in knowledge and circumstances over time. PMID:22328916

  18. The Likelihood Function and Likelihood Statistics

    NASA Astrophysics Data System (ADS)

    Robinson, Edward L.

    2016-01-01

    The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
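
    A small numerical companion to the Poisson discussion (the counts and candidate rates below are invented): the ML estimate of a Poisson rate, a likelihood ratio between two point hypotheses, and the likelihood treated as an un-normalized distribution over a grid.

```python
import numpy as np
from scipy.stats import poisson

counts = np.array([3, 7, 4, 6, 5, 8, 2, 5])   # illustrative event counts

# The maximum likelihood estimate of a Poisson rate is the sample mean.
lam_mle = counts.mean()

def log_lik(lam):
    return poisson.logpmf(counts, lam).sum()

# Likelihood ratio between two specific hypotheses.
lr = np.exp(log_lik(6.0) - log_lik(4.0))
print("MLE rate = %.2f, L(lam=6)/L(lam=4) = %.2f" % (lam_mle, lr))

# Treating the likelihood like an un-normalized distribution over a grid.
grid = np.linspace(0.5, 12, 500)
ll = np.array([log_lik(g) for g in grid])
w = np.exp(ll - ll.max()); w /= w.sum()
print("grid mean %.2f, central 68%% interval ~ [%.2f, %.2f]" %
      (grid @ w, grid[np.searchsorted(w.cumsum(), 0.16)],
       grid[np.searchsorted(w.cumsum(), 0.84)]))
```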

  19. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is easy to implement, fast, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
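
    One standard way to reuse an off-the-shelf Levenberg-Marquardt least-squares engine for Poisson MLE, shown here as a hedged sketch rather than the authors' implementation, is to hand it Poisson deviance residuals, whose summed squares equal the deviance and are therefore minimized at the MLE. The decay model and all values below are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 64)                         # histogram bin centers
model = lambda p: p[0] * np.exp(-t / p[1]) + p[2]  # decay plus flat background
data = rng.poisson(model([120.0, 2.0, 3.0]))       # synthetic Poisson counts

def dev_res(p):
    # Poisson deviance residuals: sum of squares = deviance = 2*(NLL - NLL_sat),
    # so an L-M least-squares engine driven by them finds the Poisson MLE.
    m = np.maximum(model(p), 1e-9)
    term = np.where(data > 0,
                    data * np.log(np.where(data > 0, data, 1.0) / m), 0.0)
    return np.sign(data - m) * np.sqrt(np.maximum(2.0 * (m - data + term), 0.0))

fit = least_squares(dev_res, x0=[80.0, 1.0, 1.0], method="lm")
print("amplitude %.1f, lifetime %.2f, background %.2f" % tuple(fit.x))
```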

  20. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement were assessed on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  1. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved on Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second, which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
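
    A toy 2D version of a contracting-grid search, under assumed settings (the grid size, contraction factor, and the Gaussian-spot Poisson likelihood are all illustrative, not the authors' configuration): evaluate the log-likelihood on a coarse grid, re-center on the best node, shrink the grid, and repeat.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic frame: a Gaussian light spot at an unknown position plus Poisson noise.
xx, yy = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
true_pos = (13.4, 20.7)
spot = lambda x0, y0: 50 * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 1.5**2)) + 1
frame = rng.poisson(spot(*true_pos))

def log_lik(x0, y0):
    m = spot(x0, y0)
    return np.sum(frame * np.log(m) - m)   # Poisson log-likelihood, constants dropped

def contracting_grid(center, half_width, n=5, shrink=0.5, iters=12):
    cx, cy = center
    for _ in range(iters):
        gx = np.linspace(cx - half_width, cx + half_width, n)
        gy = np.linspace(cy - half_width, cy + half_width, n)
        scores = [[log_lik(x, y) for y in gy] for x in gx]
        i, j = np.unravel_index(np.argmax(scores), (n, n))
        cx, cy, half_width = gx[i], gy[j], half_width * shrink
    return cx, cy

print(contracting_grid(center=(16, 16), half_width=16))  # converges near true_pos
```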

  2. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectral information using the multiple data sets in concert as compared to the statistical errors of the spectral information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
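
    The mechanics in miniature, under strong simplifying assumptions (Gaussian errors in log-flux, known amplitude, only the spectral index free; all instrument values invented): the joint negative log-likelihood is just the sum of per-instrument terms, and comparing the combined Fisher information against each single-instrument fit shows the reduction in standard error.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
gamma_true = 2.7
E1, s1 = np.logspace(0, 2, 20), 0.15       # instrument 1: energies, noise sd (dex)
E2, s2 = np.logspace(1.5, 3.5, 20), 0.25   # instrument 2
y1 = -gamma_true * np.log10(E1) + rng.normal(0, s1, E1.size)
y2 = -gamma_true * np.log10(E2) + rng.normal(0, s2, E2.size)

def nll(gamma, pairs):
    # Sum of Gaussian negative log-likelihoods over all contributing data sets.
    return sum(np.sum((y + gamma * np.log10(E))**2 / (2 * s**2))
               for E, y, s in pairs)

def fit(pairs):
    g = minimize_scalar(nll, args=(pairs,), bounds=(1, 4), method="bounded").x
    # 1-sigma error from the analytic curvature (Fisher information) of the NLL.
    info = sum(np.sum(np.log10(E)**2 / s**2) for E, y, s in pairs)
    return g, 1 / np.sqrt(info)

for name, pairs in [("inst 1", [(E1, y1, s1)]), ("inst 2", [(E2, y2, s2)]),
                    ("joint ", [(E1, y1, s1), (E2, y2, s2)])]:
    print(name, "gamma = %.3f +/- %.3f" % fit(pairs))
```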

  3. A Block Successive Lower-Bound Maximization Algorithm for the Maximum Pseudo-Likelihood Estimation of Fully Visible Boltzmann Machines.

    PubMed

    Nguyen, Hien D; Wood, Ian A

    2016-03-01

    Maximum pseudo-likelihood estimation (MPLE) is an attractive method for training fully visible Boltzmann machines (FVBMs) due to its computational scalability and the desirable statistical properties of the MPLE. No published algorithms for MPLE have been proven to be convergent or monotonic. In this note, we present an algorithm for the MPLE of FVBMs based on the block successive lower-bound maximization (BSLM) principle. We show that the BSLM algorithm monotonically increases the pseudo-likelihood values and that the sequence of BSLM estimates converges to the unique global maximizer of the pseudo-likelihood function. The relationship between the BSLM algorithm and the gradient ascent (GA) algorithm for MPLE of FVBMs is also discussed, and a convergence criterion for the GA algorithm is given.
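
    A sketch of the maximum pseudo-likelihood objective for a small ±1-valued FVBM, fitted here by the plain gradient ascent that the note compares against (not the BSLM updates it proposes); the sizes, couplings, and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 5, 2000
# Ground-truth FVBM parameters: symmetric coupling matrix with zero diagonal.
W_true = 0.3 * rng.standard_normal((d, d)); W_true = (W_true + W_true.T) / 2
np.fill_diagonal(W_true, 0.0)
b_true = 0.1 * rng.standard_normal(d)

# Gibbs-sample ±1 training data from the true model.
X = np.where(rng.uniform(size=(n, d)) < 0.5, 1.0, -1.0)
for _ in range(200):
    for i in range(d):
        a = X @ W_true[i] + b_true[i]
        p = 1 / (1 + np.exp(-2 * a))            # P(x_i = +1 | rest)
        X[:, i] = np.where(rng.uniform(size=n) < p, 1.0, -1.0)

def lpl_grad(W, b):
    # Log pseudo-likelihood: sum_i log sigmoid(2 * x_i * (b_i + sum_j W_ij x_j)).
    A = X @ W + b                               # local fields, shape (n, d)
    g = 2 * X / (1 + np.exp(2 * X * A))         # d/dA of log sigmoid(2 x A)
    gb = g.mean(axis=0)
    gW = (g.T @ X + X.T @ g) / n                # symmetrized coupling gradient
    np.fill_diagonal(gW, 0.0)
    return gW, gb

W, b = np.zeros((d, d)), np.zeros(d)
for _ in range(3000):                           # plain gradient ascent on the LPL
    gW, gb = lpl_grad(W, b)
    W += 0.1 * gW; b += 0.1 * gb
print("max |W - W_true| =", np.abs(W - W_true).max().round(2))
```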

  4. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    SciTech Connect

    Gang, Grace J.; Stayman, J. Webster; Zbijewski, Wojciech; Siewerdsen, Jeffrey H.

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP

  5. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach

    PubMed Central

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations. PMID:26474313
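
    A hedged sketch of one "defective logistic" parameterization consistent with the abstract (the paper's exact formulation may differ): true cases are mislabeled as controls with an unknown probability, so P(labeled case | x) = sigmoid(b0 + b1*x)*(1 - eps), and all three parameters are estimated jointly by ML. Identification can be weak unless the covariate effect is strong; all names and values below are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
n, eps_true = 4000, 0.3                  # 30% of true cases mislabeled as controls
x = rng.normal(size=n)
z = rng.uniform(size=n) < expit(-1.0 + 2.0 * x)   # true disease status
y = z & (rng.uniform(size=n) > eps_true)          # observed, possibly flipped, label

def nll(params):
    b0, b1, logit_eps = params
    eps = expit(logit_eps)               # keep the mislabeling rate inside (0, 1)
    q = expit(b0 + b1 * x) * (1 - eps)   # "defective" logistic: P(labeled case | x)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -np.sum(np.where(y, np.log(q), np.log(1 - q)))

fit = minimize(nll, x0=[0.0, 1.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "maxiter": 5000})
print("b0=%.2f b1=%.2f mislabeling=%.2f"
      % (fit.x[0], fit.x[1], expit(fit.x[2])))
```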

  6. Efficient Full Information Maximum Likelihood Estimation for Multidimensional IRT Models. Research Report. ETS RR-09-03

    ERIC Educational Resources Information Center

    Rijmen, Frank

    2009-01-01

    Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…

  7. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  8. Bayesian Analysis Using a Simple Likelihood Model Outperforms Parsimony for Estimation of Phylogeny from Discrete Morphological Data

    PubMed Central

    Wright, April M.; Hillis, David M.

    2014-01-01

    Despite the introduction of likelihood-based methods for estimating phylogenetic trees from phenotypic data, parsimony remains the most widely-used optimality criterion for building trees from discrete morphological data. However, it has been known for decades that there are regions of solution space in which parsimony is a poor estimator of tree topology. Numerous software implementations of likelihood-based models for the estimation of phylogeny from discrete morphological data exist, especially for the Mk model of discrete character evolution. Here we explore the efficacy of Bayesian estimation of phylogeny, using the Mk model, under conditions that are commonly encountered in paleontological studies. Using simulated data, we describe the relative performances of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. PMID:25279853

  9. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  10. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    PubMed Central

    DeRamus, Thomas P.; Kana, Rajesh K.

    2014-01-01

    Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD, and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy and neurodevelopmental adaptations. PMID:25844306

  11. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  12. The Anatomy of First-Episode and Chronic Schizophrenia: An Anatomical Likelihood Estimation Meta-Analysis

    PubMed Central

    Ellison-Wright, Ian; Glahn, David C.; Laird, Angela R.; Thelen, Sarah M.; Bullmore, Ed

    2010-01-01

    Objective The authors sought to map gray matter changes in first-episode schizophrenia and to compare these with the changes in chronic schizophrenia. They postulated that the data would show a progression of changes from hippocampal deficits in first-episode schizophrenia to include volume reductions in the amygdala and cortical gray matter in chronic schizophrenia. Method A systematic search was conducted for voxel-based structural MRI studies of patients with first-episode schizophrenia and chronic schizophrenia in relation to comparison groups. Meta-analyses of the coordinates of gray matter differences were carried out using anatomical likelihood estimation. Maps of gray matter changes were constructed, and subtraction meta-analysis was used to compare them. Results A total of 27 articles were identified for inclusion in the meta-analyses. A marked correspondence was observed in regions affected by both first-episode schizophrenia and chronic schizophrenia, including gray matter decreases in the thalamus, the left uncus/amygdala region, the insula bilaterally, and the anterior cingulate. In the comparison of first-episode schizophrenia and chronic schizophrenia, decreases in gray matter volume were detected in first-episode schizophrenia but not in chronic schizophrenia in the caudate head bilaterally; decreases were more widespread in cortical regions in chronic schizophrenia. Conclusions Anatomical changes in first-episode schizophrenia broadly coincide with a basal ganglia-thalamocortical circuit. These changes include bilateral reductions in caudate head gray matter, which are absent in chronic schizophrenia. Comparing first-episode schizophrenia and chronic schizophrenia, the authors did not find evidence for the temporolimbic progression of pathology from hippocampus to amygdala, but there was evidence for progression of cortical changes. PMID:18381902

  13. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which indicates better forecasting accuracy for the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  14. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders.

    PubMed

    DeRamus, Thomas P; Kana, Rajesh K

    2015-01-01

    Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD, and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal-temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy and neurodevelopmental adaptations.

  15. Maximum-likelihood and markov chain monte carlo approaches to estimate inbreeding and effective size from allele frequency changes.

    PubMed Central

    Laval, Guillaume; SanCristobal, Magali; Chevalet, Claude

    2003-01-01

    Maximum-likelihood and Bayesian (MCMC algorithm) estimates of the increase of the Wright-Malécot inbreeding coefficient, F(t), between two temporally spaced samples, were developed from the Dirichlet approximation of allelic frequency distribution (model MD) and from the admixture of the Dirichlet approximation and the probabilities of fixation and loss of alleles (model MDL). Their accuracy was tested using computer simulations in which F(t) = 10% or less. The maximum-likelihood method based on the model MDL was found to be the best estimator of F(t) provided that initial frequencies are known exactly. When founder frequencies are estimated from a limited set of founder animals, only the estimates based on the model MD can be used for the moment. In this case no method was found to be the best in all situations investigated. The likelihood and Bayesian approaches give better results than the classical F-statistics when markers exhibiting a low polymorphism (such as the SNP markers) are used. Concerning estimation of the effective population size, all the new estimators presented here were found to be better than the classically used F-statistics. PMID:12871924

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of numerically obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. It is shown that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  17. Estimation of the treatment effect under an incomplete block crossover design in binary data - A conditional likelihood approach.

    PubMed

    Lui, Kung-Jong

    2015-07-15

    A random effects logistic regression model is proposed for an incomplete block crossover trial comparing three treatments when the underlying patient response is dichotomous. On the basis of the conditional distributions, the conditional maximum likelihood estimator for the relative effect between treatments and its estimated asymptotic standard error are derived. An asymptotic interval estimator and an exact interval estimator are also developed. Monte Carlo simulation is used to evaluate the performance of these estimators. Both asymptotic and exact interval estimators are found to perform well in a variety of situations. When the number of patients is small, the exact interval estimator, which assures a coverage probability greater than or equal to the desired confidence level, can be especially useful. Data taken from a crossover trial comparing the low and high doses of an analgesic with a placebo for the relief of pain in primary dysmenorrhea are used to illustrate the use of these estimators and the potential usefulness of the incomplete block crossover design.

  18. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and

  19. dPIRPLE: A Joint Estimation Framework for Deformable Registration and Penalized-Likelihood CT Image Reconstruction using Prior Images

    PubMed Central

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-01-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration

  20. Reconstruction of difference in sequential CT studies using penalized likelihood estimation

    PubMed Central

    Pourmorteza, A; Dang, H; Siewerdsen, J H; Stayman, J W

    2016-01-01

    Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, clinically reasonable capture ranges were demonstrated. Lastly, the registration algorithm had a broad capture range, and the error for reconstruction of CT data was 35% and 20% less than that of filtered back-projection for RoD and PL, respectively. The RoD has potential for delivering high-quality difference images in a range of sequential clinical

  1. Asymptotic efficiency of the pseudo-maximum likelihood estimator in multi-group factor models with pooled data.

    PubMed

    Jin, Shaobo; Yang-Wallentin, Fan; Christoffersson, Anders

    2015-05-15

    A multi-group factor model is suitable for data originating from different strata. However, it often requires a relatively large sample size to avoid numerical issues such as non-convergence and non-positive definite covariance matrices. An alternative is to pool data from different groups, in which case a single-group factor model is fitted to the pooled data using maximum likelihood. In this paper, properties of pseudo-maximum likelihood (PML) estimators for pooled data are studied. The pooled data are assumed to be normally distributed, as if arising from a single group. The resulting asymptotic efficiency of the PML estimators of factor loadings is compared with that of the multi-group maximum likelihood estimators. The effect of pooling is investigated through a two-group factor model. The variances of factor loadings for the pooled data are underestimated under the normal theory when error variances in the smaller group are larger. Underestimation is due to dependence between the pooled factors and pooled error terms. Small-sample properties of the PML estimators are also investigated using a Monte Carlo study.

  2. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.

  3. Odds ratio for 2 × 2 tables: Mantel-Haenszel estimator, profile likelihood, and presence of surrogate responses.

    PubMed

    Banerjee, Buddhananda; Biswas, Atanu

    2014-01-01

    The use of surrogate outcomes to improve inference in biomedical problems is an area of growing interest. Here, we consider a setup where both the true and surrogate endpoints are binary and we observe all the surrogate endpoints along with a few true endpoints. In a two-treatment setup, we study the surrogate-augmented Mantel-Haenszel estimator based on observations from different groups when the group effect is present. We compare the Mantel-Haenszel estimator with the one obtained by maximizing the profile likelihood in a surrogate-augmented setup. We observe that the performances of these estimators are very close.
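
    For reference, the Mantel-Haenszel common odds ratio across 2×2 strata takes only a few lines; the tables below are invented for illustration.

```python
import numpy as np

# One 2x2 table per stratum: rows = treatment/control, cols = success/failure.
tables = np.array([[[12, 18], [7, 23]],
                   [[30, 20], [21, 29]],
                   [[9, 41], [4, 46]]], dtype=float)

a, b = tables[:, 0, 0], tables[:, 0, 1]
c, d = tables[:, 1, 0], tables[:, 1, 1]
n = a + b + c + d
or_mh = np.sum(a * d / n) / np.sum(b * c / n)   # Mantel-Haenszel estimator
print("MH common odds ratio = %.2f" % or_mh)
```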

  4. Maximum likelihood estimate of target angles for a conical scan tracking system in the presence of speckle.

    PubMed

    Lubnau, D G

    1977-01-01

    The equation for the maximum likelihood estimate of target angle is derived for a conical scan tracking system when the target produces speckle and Gaussian noise is present. Operation with a direct detection receiver is assumed with the average photon flux large enough so that the discrete nature of photoelectric events may be ignored. For large average SNRs, the estimate is shown to be unbiased and the variance of the estimate limited by both the average SNR and the number of degrees of freedom of the detected field.

  5. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  6. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods.

    PubMed

    Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof

    2007-07-01

    We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
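
    The data-cloning recipe itself is compact enough to sketch: replicate the data K times, run a standard MCMC sampler on the K-cloned posterior, and read off the posterior mean as the MLE and K times the posterior variance as its sampling variance. A minimal illustration for a case with a known answer (exponential rate, where the MLE is n/sum(y)); the sampler settings and flat prior are our choices, not the paper's:

      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.exponential(scale=2.0, size=50)   # true rate = 0.5

      def log_lik(rate, y):
          return len(y) * np.log(rate) - rate * y.sum()

      def data_cloning_mcmc(y, K=20, n_iter=20_000, step=0.05):
          # Random-walk Metropolis targeting prior * likelihood^K (flat prior on rate > 0).
          rate, samples = 1.0, []
          for _ in range(n_iter):
              prop = rate + rng.normal(0, step)
              if prop > 0 and np.log(rng.uniform()) < K * (log_lik(prop, y) - log_lik(rate, y)):
                  rate = prop
              samples.append(rate)
          s = np.array(samples[n_iter // 2:])          # discard burn-in
          return s.mean(), K * s.var()                 # ~MLE, ~variance of the MLE

      print(data_cloning_mcmc(data))                   # compare with n / data.sum()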

  7. Noise Estimation and Reduction in Magnetic Resonance Imaging Using a New Multispectral Nonlocal Maximum-likelihood Filter.

    PubMed

    Bouhrara, Mustapha; Bonny, Jean-Marie; Ashinsky, Beth G; Maring, Michael C; Spencer, Richard G

    2017-01-01

    Denoising of magnetic resonance (MR) images enhances diagnostic accuracy, the quality of image manipulations such as registration and segmentation, and parameter estimation. The first objective of this paper is to introduce a new, high-performance, nonlocal filter for noise reduction in MR image sets consisting of progressively-weighted, that is, multispectral, images. This filter is a multispectral extension of the nonlocal maximum likelihood filter (NLML). Performance was evaluated on synthetic and in-vivo T2- and T1-weighted brain imaging data, and compared to the nonlocal-means (NLM) and its multispectral version, that is, MS-NLM, and the nonlocal maximum likelihood (NLML) filters. Visual inspection of filtered images and quantitative analyses showed that all filters provided substantial reduction of noise. Further, as expected, the use of multispectral information improves filtering quality. In addition, numerical and experimental analyses indicated that the new multispectral NLML filter, MS-NLML, demonstrated markedly less blurring and loss of image detail than seen with the other filters evaluated. In addition, since noise standard deviation (SD) is an important parameter for all of these nonlocal filters, a multispectral extension of the method of maximum likelihood estimation (MLE) of noise amplitude is presented and compared to both local and nonlocal MLE methods. Numerical and experimental analyses indicated the superior performance of this multispectral method for estimation of noise SD.

  8. An Algorithm for the Computation of Generalized Likelihood or Self-Critical Estimators for Binary Data.

    DTIC Science & Technology

    1986-01-01

    Report documentation keywords (partially recoverable from a garbled OCR scan): self-critical estimation, logistic distribution, Gaussian distribution, generalized likelihood, model-critical analysis, regression models, parametric proportional hazards.

  9. Collaborative targeted maximum likelihood estimation for variable importance measure: Illustration for functional outcome prediction in mild traumatic brain injuries.

    PubMed

    Pirracchio, Romain; Yue, John K; Manley, Geoffrey T; van der Laan, Mark J; Hubbard, Alan E

    2016-06-29

    Standard statistical practice used for determining the relative importance of competing causes of disease typically relies on ad hoc methods, often byproducts of machine learning procedures (stepwise regression, random forest, etc.). Causal inference framework and data-adaptive methods may help to tailor parameters to match the clinical question and free one from arbitrary modeling assumptions. Our focus is on implementations of such semiparametric methods for a variable importance measure (VIM). We propose a fully automated procedure for VIM based on collaborative targeted maximum likelihood estimation (cTMLE), a method that optimizes the estimate of an association in the presence of potentially numerous competing causes. We applied the approach to data collected from traumatic brain injury patients, specifically a prospective, observational study including three US Level-1 trauma centers. The primary outcome was a disability score (Glasgow Outcome Scale - Extended (GOSE)) collected three months post-injury. We identified clinically important predictors among a set of risk factors using a variable importance analysis based on targeted maximum likelihood estimators (TMLE) and on cTMLE. Via a parametric bootstrap, we demonstrate that the latter procedure has the potential for robust automated estimation of variable importance measures based upon machine-learning algorithms. The cTMLE estimator was associated with substantially less positivity bias as compared to TMLE and larger coverage of the 95% CI. This study confirms the power of an automated cTMLE procedure that can target model selection via machine learning to estimate VIMs in complicated, high-dimensional data.

  10. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
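
    The GLUE procedure contrasted with standard Monte Carlo above reduces to: sample parameter sets, score each with a subjective likelihood (goodness-of-fit) measure against observations, discard non-behavioral sets, and weight predictions by the scores. A toy sketch with an invented one-parameter head model standing in for the groundwater model; the likelihood measure and behavioral threshold are illustrative analyst choices:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0, 1, 8)

      def model(k):
          # Invented stand-in for the flow model: heads as a function of log10 conductivity.
          return 10 + k * x

      obs = model(-6.5) + rng.normal(0, 0.1, 8)        # synthetic "observed heads"

      ks = rng.uniform(-8, -5, 5000)                   # Monte Carlo parameter samples
      sse = np.array([np.sum((model(k) - obs) ** 2) for k in ks])
      L = np.exp(-sse / sse.min())                     # one of many possible likelihood measures
      keep = L > 0.1                                   # behavioral threshold (analyst's choice)
      w = L[keep] / L[keep].sum()
      print("likelihood-weighted estimate of log10 K:", np.sum(w * ks[keep]))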

  11. Likelihood-based estimation of the effective population size using temporal changes in allele frequencies: a genealogical approach.

    PubMed Central

    Berthier, Pierre; Beaumont, Mark A; Cornuet, Jean-Marie; Luikart, Gordon

    2002-01-01

    A new genetic estimator of the effective population size (N(e)) is introduced. This likelihood-based (LB) estimator uses two temporally spaced genetic samples of individuals from a population. We compared its performance to that of the classical F-statistic-based N(e) estimator (N(eFk)) by using data from simulated populations with known N(e) and real populations. The new likelihood-based estimator (N(eLB)) showed narrower credible intervals and greater accuracy than N(eFk) when genetic drift was strong, but performed only slightly better when genetic drift was relatively weak. When drift was strong (e.g., N(e) = 20 for five generations), as few as approximately 10 loci (heterozygosity of 0.6; samples of 30 individuals) are sufficient to consistently achieve credible intervals with an upper limit <50 using the LB method. In contrast, approximately 20 loci are required for the same precision when using the classical F-statistic approach. The N(eLB) estimator is much improved over the classical method when there are many rare alleles. It will be especially useful in conservation biology because it less often overestimates N(e) than does N(eFk) and thus is less likely to erroneously suggest that a population is large and has a low extinction risk. PMID:11861575

  12. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  13. Maximum likelihood estimation of the parameters and quantiles of the general extreme-value distribution from censored samples

    NASA Astrophysics Data System (ADS)

    Phien, Huynh Ngoc; Fang, Tsu-Shang Emma

    1989-01-01

    The General Extreme Value (GEV) distribution has become increasingly popular, as has the use of historic information, in flood frequency analysis during recent years. Both call for a systematic investigation of the properties of the maximum likelihood (ML) estimators obtained from censored samples. In this study, such an investigation was made for the type-1 censoring believed to be more frequently encountered in practical situations. All the mathematical equations needed for obtaining the ML estimators of the parameters and the quantiles (represented by the T-year event) were derived, and Monte Carlo experiments were carried out to determine their sampling properties. It was found that censoring may reduce the bias of the parameter estimators but does not necessarily increase the variances. It was also found that the variances and covariances of the parameter estimators, and hence the variance of the T-year event, are better approximated by using the observed rather than the Fisher information matrix.
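
    For type-1 censoring, the likelihood combines density terms for the observations above the censoring threshold with a F(x0)^m term for the m censored values. A sketch using scipy on synthetic data (note that scipy's genextreme shape parameter c is the negative of the usual hydrological shape convention; the data, threshold, and starting values are illustrative):

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(3)
      full = stats.genextreme.rvs(c=-0.1, loc=100, scale=30, size=60, random_state=rng)
      x0 = 80.0                                  # censoring threshold (e.g., perception level)
      obs = full[full > x0]
      m = (full <= x0).sum()                     # number of censored observations

      def neg_log_lik(theta):
          c, loc, scale = theta
          if scale <= 0:
              return np.inf
          ll = stats.genextreme.logpdf(obs, c, loc, scale).sum()
          ll += m * stats.genextreme.logcdf(x0, c, loc, scale)   # censored contribution
          return -ll

      res = optimize.minimize(neg_log_lik, x0=[0.0, np.median(obs), obs.std()],
                              method="Nelder-Mead")
      print(res.x)                                       # (shape, location, scale)
      print(stats.genextreme.ppf(0.99, *res.x))          # estimated 100-year event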

  14. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  15. Activities: Visualization, Estimation, Computation.

    ERIC Educational Resources Information Center

    Maletsky, Evan M.

    1982-01-01

    The material is designed to help students build a cone model, visualize how its dimensions change as its shape changes, estimate maximum volume position, and develop problem-solving skills. Worksheets designed for duplication for classroom use are included. Part of the activity involves student analysis of a BASIC program. (MP)
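
    The record does not reproduce the activity's BASIC program, so the following shows only the standard construction such cone activities use: folding a sector of fixed slant height L gives r^2 + h^2 = L^2, and the volume V = (1/3)*pi*r^2*h peaks at h = L/sqrt(3), which a simple numeric scan confirms:

      import numpy as np

      # Cone folded from a circular sector of fixed slant height L: as the shape
      # changes, radius and height trade off along r^2 + h^2 = L^2.
      L = 10.0
      h = np.linspace(0.01, L - 0.01, 1000)
      V = np.pi / 3 * (L**2 - h**2) * h
      print(h[V.argmax()], L / np.sqrt(3))   # numeric maximizer vs. analytic h = L/sqrt(3)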

  16. Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer

    PubMed Central

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander

    2014-01-01

    The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method. PMID:24173548
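
    As a worked illustration of the mechanics sketched above (our example, not the paper's): maximize a binomial log-likelihood numerically and take the standard error from the curvature (inverse observed information) at the maximum, which reproduces the textbook answers p_hat = y/n and sqrt(p_hat*(1-p_hat)/n).

      import numpy as np
      from scipy.optimize import minimize_scalar

      y, n = 7, 50                       # 7 events in 50 trials (invented data)

      def neg_log_lik(p):
          return -(y * np.log(p) + (n - y) * np.log(1 - p))

      res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
      p_hat = res.x                      # equals y/n analytically
      se = np.sqrt(p_hat * (1 - p_hat) / n)   # inverse observed information
      print(p_hat, se)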

  17. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is an approximation useful to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of MLE (MLE1 and MLE2, respectively), and Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
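
    Since the envelope intensity y = x^2 of a Nakagami-m variable is gamma distributed with shape m, the MBE is the inverse normalized variance of y, and the exact MLE solves log(m) - digamma(m) = log(mean(y)) - mean(log(y)). The sketch below solves that score equation with a root-finder instead of the MLE1/MLE2/Greenwood series approximations studied in the paper; the simulated envelope is illustrative:

      import numpy as np
      from scipy.special import digamma
      from scipy.optimize import brentq

      def nakagami_mbe(x):
          # Moment-based "inverse normalized variance" estimator on intensity y = x^2.
          y = x ** 2
          return y.mean() ** 2 / y.var()

      def nakagami_mle(x):
          # Exact ML for m via the gamma-shape score equation of y = x^2.
          y = x ** 2
          s = np.log(y.mean()) - np.log(y).mean()
          return brentq(lambda m: np.log(m) - digamma(m) - s, 1e-3, 1e3)

      rng = np.random.default_rng(7)
      x = rng.gamma(shape=1.5, scale=1.0 / 1.5, size=2000) ** 0.5   # Nakagami(m=1.5, Omega=1)
      print(nakagami_mbe(x), nakagami_mle(x))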

  18. Parametrically guided estimation in nonparametric varying coefficient models with quasi-likelihood

    PubMed Central

    Davenport, Clemontina A.; Maity, Arnab; Wu, Yichao

    2015-01-01

    Varying coefficient models allow us to generalize standard linear regression models to incorporate complex covariate effects by modeling the regression coefficients as functions of another covariate. For nonparametric varying coefficients, we can borrow the idea of parametrically guided estimation to improve asymptotic bias. In this paper, we develop a guided estimation procedure for the nonparametric varying coefficient models. Asymptotic properties are established for the guided estimators and a method of bandwidth selection via bias-variance tradeoff is proposed. We compare the performance of the guided estimator with that of the unguided estimator via both simulation and real data examples. PMID:26146469

  19. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  20. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  1. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    PubMed Central

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2009-01-01

    Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees. PMID:19737395

  2. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  3. Maximum-likelihood estimation of familial correlations from multivariate quantitative data on pedigrees: a general method and examples.

    PubMed Central

    Rao, D C; Vogler, G P; McGue, M; Russell, J M

    1987-01-01

    A general method for maximum-likelihood estimation of familial correlations from pedigree data is presented. The method is applicable to any type of data structure, including pedigrees in which variable numbers of individuals are present within classes of relatives, data in which multiple phenotypic measures are obtained on each individual, and multiple group analyses in which some correlations are equated across groups. The method is applied to data on high-density lipoprotein cholesterol and total cholesterol levels obtained from participants in the Swedish Twin Family Study. Results indicate that there is strong familial resemblance for both traits but little cross-trait resemblance. PMID:3687943

  4. Empirical models for estimating baseline streamflows in California and their likelihood of anthropogenic modification

    USGS Publications Warehouse

    Carlisle, Daren M.; Wolock, David M.; Howard, Jeannette K.; Grantham, Theodore E.; Fesenmyer, Kurt; Wieczorek, Michael

    2016-01-01

    The dataset contains estimates of natural monthly streamflow for 135,118 stream segments in California, USA, from 1950 to 2012. These estimates were made using statistical models described in Carlisle and others, 2016, Open-File Report 2016-1189. Segments are identified per the medium-resolution National Hydrography Dataset (NHD), Version 1. The dataset also contains observed monthly streamflows and estimates of natural monthly streamflows for 894 USGS stream gages in California, USA.

  5. Thinking Concretely Increases the Perceived Likelihood of Risks: The Effect of Construal Level on Risk Estimation.

    PubMed

    Lermer, Eva; Streicher, Bernhard; Sachs, Rainer; Raue, Martina; Frey, Dieter

    2016-03-01

    Recent findings on construal level theory (CLT) suggest that abstract thinking leads to a lower estimated probability of an event occurring compared to concrete thinking. We applied this idea to the risk context and explored the influence of construal level (CL) on the overestimation of small and underestimation of large probabilities for risk estimates concerning a vague target person (Study 1 and Study 3) and personal risk estimates (Study 2). We were specifically interested in whether the often-found overestimation of small probabilities could be reduced with abstract thinking, and the often-found underestimation of large probabilities was reduced with concrete thinking. The results showed that CL influenced risk estimates. In particular, a concrete mindset led to higher risk estimates compared to an abstract mindset for several adverse events, including events with small and large probabilities. This suggests that CL manipulation can indeed be used for improving the accuracy of lay people's estimates of small and large probabilities. Moreover, the results suggest that professional risk managers' risk estimates of common events (thus with a relatively high probability) could be improved by adopting a concrete mindset. However, the abstract manipulation did not lead managers to estimate extremely unlikely events more accurately. Potential reasons for different CL manipulation effects on risk estimates' accuracy between lay people and risk managers are discussed.

  6. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  7. Estimating the likelihood of an eruption from a volcano with missing onsets in its record

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Bebbington, Mark

    2012-10-01

    Historical eruption records are often incomplete, a problem which is exacerbated in dealing with catalogs derived from geologic records. We examine the problem of estimating the true (adjusted for missing observations) parameters and hence the hazard in a Weibull (or gamma) renewal model, which is commonly used to model time series of volcanic onsets. Robust regression, robust estimation using repeated medians, and results from the theory of inverses of thinned renewal processes failed to provide consistent estimates from simulated data. Hence we adopted a hidden Markov model framework, where the hidden state is a reflection of the number of missing onsets. This also allows for the completeness level of the record to be estimated, and offers a means of determining where in the record the missing observations are likely to be found. Tested on data from the Holocene record of Mt Taranaki, the preliminary estimates of completeness are 86-87% complete (record 7 ka — present) and 78-80% complete beyond that. These figures were independently verified using a model of tephra dispersal. The estimated present hazard is approximately 20% higher than estimated without allowing for missing data.

  8. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.

  9. Likelihood parameter estimation for calibrating a soil moisture using radar backscatter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assimilating soil moisture information contained in synthetic aperture radar imagery into land surface model predictions can be done using a calibration, or parameter estimation, approach. The presence of speckle, however, necessitates aggregating backscatter measurements over large land areas in or...

  10. A Path Following Algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)

    DTIC Science & Technology

    2008-07-24

    Excerpt fragments (partially recoverable from OCR): Using convexity, the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient to characterize a solution of... estimation of large covariance matrices. Ledoit and Wolf (2004) present a shrinkage estimator that is the asymptotically optimal convex linear combination of... consistently estimated when the number of variables p is non-negligible in comparison to the sample size n. As one example, it is a well-known fact that...

  11. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  12. Reliable and computationally efficient maximum-likelihood estimation of “proper” binormal ROC curves

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.

    2009-01-01

    Rationale and Objectives Estimation of ROC curves and their associated indices from experimental data can be problematic, especially in multi-reader, multi-case (MRMC) observer studies. Wilcoxon estimates of area under the curve (AUC) can be strongly biased with categorical data, whereas the conventional binormal ROC curve-fitting model may produce unrealistic fits. The “proper” binormal model (PBM) was introduced by Metz and Pan (1) to provide acceptable fits for both sturdy and problematic datasets, but other investigators found that its first software implementation was numerically unstable in some situations (2). Therefore, we created an entirely new algorithm to implement the PBM. Materials and Methods This paper describes in detail the new PBM curve-fitting algorithm, which was designed to perform successfully in all problematic situations encountered previously. Extensive testing was conducted also on a broad variety of simulated and real datasets. Windows, Linux, and Apple Macintosh OS X versions of the algorithm are available online at http://xray.bsd.uchicago.edu/krl/. Results Plots of fitted curves as well as summaries of AUC estimates and their standard errors are reported. The new algorithm never failed to converge and produced good fits for all of the several million datasets on which it was tested. For all but the most problematic datasets, the algorithm also produced very good estimates of AUC standard error. The AUC estimates compared well with Wilcoxon estimates for continuously-distributed data and are expected to be superior for categorical data. Conclusion This implementation of the PBM is reliable in a wide variety of ROC curve-fitting tasks. PMID:17574132

  13. Process for estimating likelihood and confidence in post detonation nuclear forensics.

    SciTech Connect

    Darby, John L.; Craft, Charles M.

    2014-07-01

    Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their approach to answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.

  14. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    Excerpt (partially recoverable from OCR; the standard report disclaimer has been dropped): ..."Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate...
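
    Exact Gaussian ML for an MA(1), of the kind the report's iterative procedures target, can be sketched in a brute-force way by maximizing the multivariate normal likelihood under the banded MA(1) covariance (diagonal sigma^2*(1+theta^2), first off-diagonals sigma^2*theta). This dense-matrix form is our stand-in, not the report's algorithm, and the data are synthetic:

      import numpy as np
      from scipy.stats import multivariate_normal
      from scipy.optimize import minimize

      rng = np.random.default_rng(11)
      e = rng.normal(size=201)
      x = e[1:] + 0.6 * e[:-1]               # MA(1) with theta = 0.6, sigma = 1

      def neg_log_lik(params, x):
          theta, log_sigma = params
          sigma2 = np.exp(2 * log_sigma)
          n = len(x)
          # Exact Gaussian likelihood via the banded MA(1) covariance matrix.
          cov = sigma2 * ((1 + theta ** 2) * np.eye(n)
                          + theta * (np.eye(n, k=1) + np.eye(n, k=-1)))
          return -multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(x)

      res = minimize(neg_log_lik, x0=[0.1, 0.0], args=(x,), method="Nelder-Mead")
      print(res.x)                            # (theta_hat, log_sigma_hat)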

  15. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    SciTech Connect

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
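
    The two-step structure described above follows directly from the linear model y = Ax + n with white Gaussian noise: the ML estimate is (A^H A)^{-1} A^H y, where forming A^H y is the cross-correlation/back-projection step and the inversion is the sidelobe-mitigating second step, with CRLB proportional to sigma^2 (A^H A)^{-1}. A sketch with a random complex matrix standing in for the SAR forward operator (not actual stripmap geometry):

      import numpy as np

      rng = np.random.default_rng(5)
      n_data, n_pix = 400, 64
      A = rng.normal(size=(n_data, n_pix)) + 1j * rng.normal(size=(n_data, n_pix))
      x_true = rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix)
      sigma2 = 0.1
      y = A @ x_true + np.sqrt(sigma2 / 2) * (rng.normal(size=n_data)
                                              + 1j * rng.normal(size=n_data))

      bp = A.conj().T @ y                            # step 1: correlation / back-projection
      x_ml = np.linalg.solve(A.conj().T @ A, bp)     # step 2: full system inversion
      crlb = sigma2 * np.linalg.inv(A.conj().T @ A)  # CRLB (up to complex-parameterization conventions)
      print(np.abs(x_ml - x_true).max(), np.real(crlb.diagonal()).mean())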

  17. The Undiscovered Country: Can We Estimate the Likelihood of Extrasolar Planetary Habitability?

    NASA Astrophysics Data System (ADS)

    Unterborn, C. T.; Panero, W. R.; Hull, S. D.

    2015-12-01

    Plate tectonics has operated on Earth for a majority of its lifetime. Tectonics regulates atmospheric carbon and creates a planetary-scale water cycle, and is a primary factor in the Earth being habitable. While the mechanism for initiating tectonics is unknown, as we expand our search for habitable worlds, understanding which planetary compositions produce planets capable of supporting long-term tectonics is of paramount importance. On Earth, this sustained tectonic activity is a function of both the planet's structure and composition. Currently, however, we have no method to measure the interior composition of exoplanets. In our Solar system, though, Solar abundances for refractory elements mirror the Earth's to within ~10%, allowing the adoption of Solar abundances as proxies for Earth's. It is not known, however, whether this mirroring of stellar and terrestrial planet abundances holds true for other star-planet systems without determination of the composition of initial planetesimals via condensation sequence calculations. Currently, all code for ascertaining these sequences is commercial or closed-source. We present, then, the open-source Arbitrary Composition Condensation Sequence calculator (ArCCoS) for converting the elemental composition of a parent star to that of the planet-building material as well as the extent of oxidation within the planetesimals. These data allow us to constrain the likelihood for one of the main drivers of plate tectonics: the basalt-to-eclogite transition in subducting plates. Unlike basalt, eclogite is denser than the surrounding mantle and thus sinks into the mantle, pulling the overlying slab with it. Without this higher density relative to the mantle, plates stagnate at shallow depths, shutting off plate tectonics. Using the results of ArCCoS as abundance inputs into the MELTS and HeFESTo thermodynamic models, we calculate phase relations for the first basaltic crust and depleted mantle of a terrestrial planet produced from

  18. A Unified Maximum Likelihood Framework for Simultaneous Motion and T1 Estimation in Quantitative MR T1 Mapping.

    PubMed

    Ramos-Llorden, Gabriel; den Dekker, Arnold J; Van Steenkiste, Gwendolyn; Jeurissen, Ben; Vanhevel, Floris; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan

    2017-02-01

    In quantitative MR T1 mapping, the spin-lattice relaxation time T1 of tissues is estimated from a series of T1-weighted images. As the T1 estimation is a voxel-wise estimation procedure, correct spatial alignment of the T1-weighted images is crucial. Conventionally, the T1-weighted images are first registered based on a general-purpose registration metric, after which the T1 map is estimated. However, as demonstrated in this paper, such a two-step approach leads to a bias in the final T1 map. In our work, instead of considering motion correction as a preprocessing step, we recover the motion-free T1 map using a unified estimation approach. In particular, we propose a unified framework where the motion parameters and the T1 map are simultaneously estimated with a Maximum Likelihood (ML) estimator. With our framework, the relaxation model, the motion model as well as the data statistics are jointly incorporated to provide substantially more accurate motion and T1 parameter estimates. Experiments with realistic Monte Carlo simulations show that the proposed unified ML framework outperforms the conventional two-step approach as well as state-of-the-art model-based approaches, in terms of both motion and T1 map accuracy and mean-square error. Furthermore, the proposed method was additionally validated in a controlled experiment with real T1-weighted data and with two in vivo human brain T1-weighted data sets, showing its applicability in real-life scenarios.

  19. A flexible decision-aided maximum likelihood phase estimation in hybrid QPSK/OOK coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-04-01

    Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block length effect impacts system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion management (DM) fiber links. Based on our model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block length effect. We derive the analytical expression of phase error variance for the performance prediction of a coherent receiver with the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that our flexible DA-ML algorithm offers a significant performance improvement over the conventional DA-ML algorithm when the block length is fixed. Compared with the conventional DA-ML with optimum block length, our flexible DA-ML can obtain better system performance. This means our flexible DA-ML algorithm is more effective at mitigating phase noise than the conventional DA-ML algorithm.
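
    The conventional block DA-ML estimator underlying the flexible variant is one line of algebra: within a block, the phase estimate is the argument of the sum of received samples multiplied by conjugated symbol decisions; the paper's contribution is, in effect, to weight the terms of that sum. A toy QPSK illustration (genie decisions stand in for receiver decisions; the noise level and block handling are illustrative):

      import numpy as np

      rng = np.random.default_rng(9)
      N = 1000
      bits = rng.integers(0, 4, N)
      syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))    # QPSK alphabet
      phase = 0.3                                            # unknown carrier phase
      y = syms * np.exp(1j * phase) + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

      def da_ml_phase(y, decisions):
          # Decision-aided ML phase estimate: angle of the decision-removed sum.
          return np.angle(np.sum(y * np.conj(decisions)))

      # Known symbols used here as stand-ins for receiver decisions.
      print(da_ml_phase(y, syms))                            # ~ 0.3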

  20. General second-order covariance of Gaussian maximum likelihood estimates applied to passive source localization in fluctuating waveguides.

    PubMed

    Bertsatos, Ioannis; Zanolin, Michele; Ratilal, Purnima; Chen, Tianrun; Makris, Nicholas C

    2010-11-01

    A method is provided for determining necessary conditions on sample size or signal to noise ratio (SNR) to obtain accurate parameter estimates from remote sensing measurements in fluctuating environments. These conditions are derived by expanding the bias and covariance of maximum likelihood estimates (MLEs) in inverse orders of sample size or SNR, where the first-order covariance term is the Cramer-Rao lower bound (CRLB). Necessary sample sizes or SNRs are determined by requiring that (i) the first-order bias and the second-order covariance are much smaller than the true parameter value and the CRLB, respectively, and (ii) the CRLB falls within desired error thresholds. An analytical expression is provided for the second-order covariance of MLEs obtained from general complex Gaussian data vectors, which can be used in many practical problems since (i) data distributions can often be assumed to be Gaussian by virtue of the central limit theorem, and (ii) it allows for both the mean and variance of the measurement to be functions of the estimation parameters. Here, conditions are derived to obtain accurate source localization estimates in a fluctuating ocean waveguide containing random internal waves, and the consequences of the loss of coherence on their accuracy are quantified.

  1. A real-time signal combining system for Ka-band feed arrays using maximum-likelihood weight estimates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1990-01-01

    A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.

  2. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former and six different distance measures in the latter case are compared for each component of the source parameter vector, based on the Nemenyi test over all the 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time. Also, the ABC method is improved in many aspects. Results show that discrepancy measures, which refer to likelihood functions and distance measures collectively, have significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
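
    Of the two families compared above, the likelihood-free (ABC) branch is the easier to sketch: draw source parameters from the prior, run the dispersion model, and accept draws whose simulated concentrations fall within a tolerance of the observations under some distance measure. The forward model below is an invented one-dimensional decay kernel standing in for a Gaussian plume, and the tolerance rule is one common choice among many:

      import numpy as np

      rng = np.random.default_rng(2)
      receptors = np.linspace(0.0, 1000.0, 20)

      def forward(q, xs):
          # Toy stand-in for a plume model: concentration decays away from the source.
          return q * np.exp(-np.abs(receptors - xs) / 150.0)

      obs = forward(50.0, 300.0) + rng.normal(0, 0.5, receptors.size)

      # Rejection ABC: sample (release rate q, location xs) from the prior, simulate,
      # and keep draws whose simulated concentrations are close to the data.
      n = 100_000
      q_s = rng.uniform(1, 100, n)
      x_s = rng.uniform(0, 1000, n)
      sims = q_s[:, None] * np.exp(-np.abs(receptors[None, :] - x_s[:, None]) / 150.0)
      dist = np.linalg.norm(sims - obs, axis=1)       # Euclidean distance as the discrepancy
      keep = dist < np.quantile(dist, 0.001)          # epsilon chosen as a distance quantile
      print(q_s[keep].mean(), x_s[keep].mean())       # approximate posterior means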

  3. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
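
    The core of the slope method is that S = ln(r^2 P(r)) is linear in range with slope -2*alpha, and the result summarized above is that least-squares fitting weighted by the inverse noise variance is equivalent to the MLE. A sketch on synthetic data, with the variance of S obtained from an assumed additive Gaussian detector noise by first-order error propagation (all constants are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)
      r = np.linspace(200.0, 2000.0, 60)           # range gates [m]
      alpha, C = 1e-3, 1e10                        # extinction [1/m], system constant
      P = C * np.exp(-2 * alpha * r) / r**2        # noise-free single-scatter lidar signal
      noise_sd = 2.0                               # additive Gaussian detector noise (assumed)
      Pn = P + rng.normal(0, noise_sd, r.size)

      # Slope method: S = ln(r^2 P) is linear in r with slope -2*alpha.
      S = np.log(r**2 * Pn)
      w = (Pn / noise_sd) ** 2                     # ~ 1 / var(S), first-order propagation
      X = np.column_stack([np.ones_like(r), r])
      beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * S))
      print(-beta[1] / 2)                          # extinction estimate, ~ 1e-3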

  4. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    PubMed

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons potentially uncovering more recaptures. Simulations suggest that the method is tolerant to per locus error rates of up to 5% per locus and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for the downstream analysis with traditional mark-recapture methods.

  5. Developing New Rainfall Estimates to Identify the Likelihood of Agricultural Drought in Mesoamerica

    NASA Astrophysics Data System (ADS)

    Pedreros, D. H.; Funk, C. C.; Husak, G. J.; Michaelsen, J.; Peterson, P.; Landsfeld, M.; Rowland, J.; Aguilar, L.; Rodriguez, M.

    2012-12-01

    The population in Central America was estimated at ~40 million people in 2009, with 65% in rural areas directly relying on local agricultural production for subsistence, and additional urban populations relying on regional production. Mapping rainfall patterns and values in Central America is a complex task due to the rough topography and the influence of two oceans on either side of this narrow land mass. Characterization of precipitation amounts both in time and space is of great importance for monitoring agricultural food production for food security analysis. With the goal of developing reliable rainfall fields, the Famine Early Warning Systems Network (FEWS NET) has compiled a dense set of historical rainfall stations for Central America through cooperation with meteorological services and global databases. The station database covers the years 1900-present with the highest density between 1970-2011. Interpolating station data by themselves does not provide a reliable result because it ignores topographical influences which dominate the region. To account for this, climatological rainfall fields were used to support the interpolation of the station data using a modified Inverse Distance Weighting process. By blending the station data with the climatological fields, a historical rainfall database was compiled for 1970-2011 at a 5 km resolution for every five day interval. This new database opens the door to analysis such as the impact of sea surface temperature on rainfall patterns, changes to the typical dry spell during the rainy season, characterization of drought frequency and rainfall trends, among others. This study uses the historical database to identify the frequency of agricultural drought in the region and explores possible changes in precipitation patterns during the past 40 years. A threshold of 500mm of rainfall during the growing season was used to define agricultural drought for maize. This threshold was selected based on assessments of crop
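
    A schematic of the climatology-aided interpolation idea (our toy construction; the actual FEWS NET procedure, grids, and station handling differ): station observations are expressed as ratios to the climatological field, the ratios are interpolated with inverse-distance weighting, and the result is rescaled by the climatology so that topographic structure comes from the climatological field rather than from the sparse stations.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy grids: a climatological rainfall field and a handful of station observations.
      ny, nx = 50, 50
      yy, xx = np.mgrid[0:ny, 0:nx]
      clim = 100 + 50 * np.sin(xx / 8.0) * np.cos(yy / 10.0)   # stand-in climatology [mm]

      st_xy = rng.uniform(0, 49, size=(15, 2))                  # station coordinates
      st_clim = np.array([clim[int(y), int(x)] for y, x in st_xy])
      st_obs = st_clim * rng.uniform(0.6, 1.4, 15)              # observed totals

      # Climatology-aided IDW: interpolate station/climatology ratios, then rescale.
      ratio = st_obs / st_clim
      d2 = (yy[..., None] - st_xy[:, 0]) ** 2 + (xx[..., None] - st_xy[:, 1]) ** 2
      w = 1.0 / np.maximum(d2, 1e-6)                            # inverse-distance-squared weights
      ratio_field = (w * ratio).sum(axis=-1) / w.sum(axis=-1)
      rain = clim * ratio_field
      print(rain.mean())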

  6. Estimating Amazonian rainforest stability and the likelihood for large-scale forest dieback

    NASA Astrophysics Data System (ADS)

    Rammig, Anja; Thonicke, Kirsten; Jupp, Tim; Ostberg, Sebastian; Heinke, Jens; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter

    2010-05-01

    Annually, tropical forests process approximately 18 Pg of carbon through respiration and photosynthesis - more than twice the rate of anthropogenic fossil fuel emissions. Current climate change may be transforming this carbon sink into a carbon source by changing forest structure and dynamics. Increasing temperatures and potentially decreasing precipitation and thus prolonged drought stress may lead to increasing physiological stress and reduced productivity for trees. Resulting decreases in evapotranspiration and therefore convective precipitation could further accelerate drought conditions and destabilize the tropical ecosystem as a whole and lead to an 'Amazon forest dieback'. The projected direction and intensity of climate change vary widely within the region and between different scenarios from climate models (GCMs). In the scope of a World Bank-funded study, we assessed the 24 General Circulation Models (GCMs) evaluated in the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4) with respect to their capability to reproduce present-day climate in the Amazon basin using a Bayesian approach. With this approach, greater weight is assigned to the models that simulate well the annual cycle of rainfall. We then use the resulting weightings to create probability density functions (PDFs) for future forest biomass changes as simulated by the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJmL) to estimate the risk of potential Amazon rainforest dieback. Our results show contrasting changes in forest biomass throughout five regions of northern South America: If photosynthetic capacity and water use efficiency is enhanced by CO2, biomass increases across all five regions. However, if CO2-fertilisation is assumed to be absent or less important, then substantial dieback occurs in some scenarios and thus, the risk of forest dieback is considerably higher. Particularly affected are regions in the central Amazon basin. The range of

  7. Cultivation and counter cultivation: does religiosity shape the relationship between television viewing and estimates of crime prevalence and assessment of victimization likelihood?

    PubMed

    Hetsroni, Amir; Lowenstein, Hila

    2013-02-01

    Religiosity may change the direction of the effect of TV viewing on assessment of the likelihood of personal victimization and estimates concerning crime prevalence. A content analysis of a representative sample of TV programming (56 hours of prime-time shows) was done to identify the most common crimes on television, followed by a survey of a representative sample of the adult public in a large urban district (778 respondents) who were asked to estimate the prevalence of these crimes and to assess the likelihood of themselves being victimized. People who defined themselves as non-religious increased their estimates of prevalence for crimes often depicted on TV, as they reported more time watching TV (ordinary cultivation effect), whereas estimates regarding the prevalence of crime and assessment of victimization likelihood among religious respondents were lower with reports of more time devoted to television viewing (counter-cultivation effect).

  8. Modeling short duration extreme precipitation patterns using copula and generalized maximum pseudo-likelihood estimation with censoring

    NASA Astrophysics Data System (ADS)

    Bargaoui, Zoubeida Kebaili; Bardossy, Andràs

    2015-10-01

    The paper aims to advance research on the spatial variability of heavy rainfall events using spatial copula analysis. To demonstrate the methodology, short-time-resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations at a continuous 30 min time scale recorded over a network of 17 rain gauges for the period July 1989-July 2004. The analysis is performed by aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme copula models, the Husler-Reiss model and the Gumbel model, are investigated. Both involve a single parameter to be estimated. Thus, model fitting is performed for every pair of stations for a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation is adopted with censoring, by analogy with methods of univariate estimation combining historical and paleoflood information with systematic data. Only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated and helps evaluate model adequacy, which is assessed using the Kolmogorov-Smirnov distance test. In order to assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted. Hence, the extremal coefficient is reported as a function of the interdistance between stations. If it is less than 1.7, stations are interpreted as dependent in the extremes. The analysis of the fitted extremal coefficients with respect to station interdistance highlights two regimes with different dependence structures: a short spatial extent regime linked to short duration intervals (from 30 min to 6 h) with an extent of about 8 km and a large spatial extent regime related to longer rainfall intervals (from 12 h to 24 h) with an
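
    The censored pseudo-likelihood can be illustrated for the Gumbel (logistic) copula, whose extremal coefficient has the closed form 2^(1/theta). In this sketch the data, the threshold, and the use of numerical derivatives of C in place of the analytical copula density and partial derivatives are simplifications for brevity.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def gumbel_C(u, v, theta):
            return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1/theta)))

        def censored_negloglik(theta, u, v, u0, h=1e-5):
            C = lambda a, b: gumbel_C(a, b, theta)
            # Both components censored: contribution C(u0, u0).
            m = (u <= u0) & (v <= u0)
            ll = m.sum() * np.log(C(u0, u0))
            # One component observed: contribution is the partial derivative of C.
            m = (u > u0) & (v <= u0)
            ll += np.sum(np.log((C(u[m]+h, u0) - C(u[m]-h, u0)) / (2*h)))
            m = (u <= u0) & (v > u0)
            ll += np.sum(np.log((C(u0, v[m]+h) - C(u0, v[m]-h)) / (2*h)))
            # Both observed: contribution is the copula density c(u, v).
            m = (u > u0) & (v > u0)
            c = (C(u[m]+h, v[m]+h) - C(u[m]+h, v[m]-h)
                 - C(u[m]-h, v[m]+h) + C(u[m]-h, v[m]-h)) / (4*h*h)
            ll += np.sum(np.log(np.maximum(c, 1e-300)))
            return -ll

        # Rank-based pseudo-observations for one (hypothetical) pair of stations.
        rng = np.random.default_rng(2)
        x, y = rng.gamma(2, 5, 500), rng.gamma(2, 5, 500)
        u = (np.argsort(np.argsort(x)) + 1) / (len(x) + 1)
        v = (np.argsort(np.argsort(y)) + 1) / (len(y) + 1)
        res = minimize_scalar(censored_negloglik, bounds=(1.01, 10),
                              args=(u, v, 0.9), method='bounded')
        extremal_coeff = 2 ** (1 / res.x)   # < 1.7 read as tail dependence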

  9. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    PubMed

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  10. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources.

    PubMed

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J

    2016-03-01

    Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.

  11. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

    Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323
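
    A minimal sketch of the constrained-likelihood idea for a logistic "internal" model, using simulated data and an invented external coefficient vector theta_ext: the internal log-likelihood is maximized subject to the constraint that the reduced-model score, evaluated at the external parameters with outcome probabilities implied by the full model, averages to zero over the internal sample's empirical covariate distribution.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(3)
        n = 500
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = rng.binomial(1, expit(-0.5 + 1.0*x1 + 0.7*x2))
        X_full = np.column_stack([np.ones(n), x1, x2])   # internal (full) model
        Z_red = np.column_stack([np.ones(n), x1])        # external (reduced) model
        theta_ext = np.array([-0.4, 1.2])                # hypothetical external fit

        def negloglik(beta):
            p = expit(X_full @ beta)
            return -np.sum(y*np.log(p + 1e-12) + (1-y)*np.log(1-p + 1e-12))

        def constraint(beta):
            # Reduced-model score at theta_ext, with event probabilities
            # implied by the full model, averaged over the internal sample.
            return Z_red.T @ (expit(X_full @ beta) - expit(Z_red @ theta_ext))

        res = minimize(negloglik, x0=np.zeros(3), method='SLSQP',
                       constraints=[{'type': 'eq', 'fun': constraint}])
        beta_constrained = res.x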

  12. Decision-aided maximum likelihood phase estimation with optimum block length in hybrid QPSK/16QAM coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-01-01

    We propose a general model to fully describe the cross-phase modulation (XPM) effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of the phase error variance for optimizing the block length of the QPSK channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of the coherent receiver. Bit error rate (BER) performance in the QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using DA-ML with the optimum block length, the bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block lengths of 10, 20 and 40 at a BER of 10^-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when in-line residual dispersion is 0 ps/nm.
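
    The DA-ML estimator itself is compact: within each block, the phase estimate is the argument of the sum of received samples times the conjugates of their symbol decisions. The sketch below uses synthetic QPSK data; the block length, noise level, and phase offset are arbitrary illustration values.

        import numpy as np

        def qpsk_decision(r):
            # Nearest unit-energy QPSK constellation point.
            return (np.sign(r.real) + 1j * np.sign(r.imag)) / np.sqrt(2)

        def da_ml_phase(received, block_len):
            # DA-ML block estimate: angle of sum_k r_k * conj(d_k).
            est = np.empty(len(received) // block_len)
            for b in range(len(est)):
                blk = received[b*block_len:(b+1)*block_len]
                est[b] = np.angle(np.sum(blk * np.conj(qpsk_decision(blk))))
            return est

        # Synthetic test: random QPSK symbols, common 0.2 rad offset, noise.
        rng = np.random.default_rng(4)
        sym = qpsk_decision(rng.normal(size=1000) + 1j*rng.normal(size=1000))
        rx = sym * np.exp(1j*0.2) \
             + 0.05*(rng.normal(size=1000) + 1j*rng.normal(size=1000))
        phases = da_ml_phase(rx, block_len=20)   # values near 0.2 rad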

  13. ROC (Receiver Operating Characteristics) study of maximum likelihood estimator human brain image reconstructions in PET (Positron Emission Tomography) clinical practice

    SciTech Connect

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of (18)F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.

  14. The rate test of speciation: estimating the likelihood of non-allopatric speciation from reproductive isolation rates in Drosophila.

    PubMed

    Yukilevich, Roman

    2014-04-01

    Among the most debated subjects in speciation is the question of its mode. Although allopatric (geographical) speciation is assumed to be the null model, the importance of parapatric and sympatric speciation is extremely difficult to assess and remains controversial. Here I develop a novel approach to distinguish these modes of speciation by studying the evolution of reproductive isolation (RI) among taxa. I focus on the Drosophila genus, for which measures of RI are known. First, I incorporate RI into age-range correlations. Plots show that almost all cases of weak RI are between allopatric taxa whereas sympatric taxa have strong RI. This either implies that most RI was initiated in allopatry or that RI evolves too rapidly in sympatry to be captured at incipient stages. To distinguish between these explanations, I develop a new "rate test of speciation" that estimates the likelihood of non-allopatric speciation given the distribution of RI rates in allopatry versus sympatry. Most sympatric taxa were found to have likely initiated RI in allopatry. However, two putative candidate species pairs for non-allopatric speciation were identified (5% of known Drosophila). In total, this study shows how using RI measures can greatly inform us about the geographical mode of speciation in nature.

  15. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    PubMed

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for the LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report, and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate acting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decreased power or an inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which not only markedly reduces the run time for population PK analysis but also provides more accurate covariate tests.

  16. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes that employ the averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°. PMID:27322267
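
    The joint-ML idea can be sketched as coordinate ascent that alternates between estimating the ERP template from the currently aligned trials and re-estimating each trial's delay against that template; this simplified loop with a synthetic ERP bump illustrates the general principle rather than the authors' exact scheme.

        import numpy as np

        def joint_ml_tde(trials, max_lag=20, n_iter=5):
            n, L = trials.shape
            delays = np.zeros(n, dtype=int)
            for _ in range(n_iter):
                # (1) Template = mean of trials aligned by current delays.
                aligned = np.array([np.roll(trials[i], -delays[i]) for i in range(n)])
                template = aligned.mean(axis=0)
                # (2) Delay = lag maximizing correlation with the template.
                for i in range(n):
                    corr = [np.dot(np.roll(trials[i], -d), template)
                            for d in range(-max_lag, max_lag + 1)]
                    delays[i] = np.argmax(corr) - max_lag
            return delays, template

        # Synthetic ERP: a Gaussian bump with random per-trial shifts plus noise.
        rng = np.random.default_rng(5)
        t = np.arange(200)
        erp = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
        true_shifts = rng.integers(-10, 11, size=30)
        trials = np.array([np.roll(erp, d) + 0.3 * rng.normal(size=200)
                           for d in true_shifts])
        d_hat, _ = joint_ml_tde(trials)   # d_hat tracks true_shifts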

  17. Separating components of variation in measurement series using maximum likelihood estimation. Application to patient position data in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sage, J. P.; Mayles, W. P. M.; Mayles, H. M.; Syndikus, I.

    2014-10-01

    Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series where the same quantity is measured using different independent methods. The technique was tested against artificial data sets generated for values of underlying variation in the quantity and measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were determined to within 0.1 mm. The technique was applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel, it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods.
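
    For the two-methods case the model is small enough to sketch directly: each patient's pair of measurements is bivariate normal with the shared underlying variance on the off-diagonal and per-method error variances added on the diagonal. The SDs used to simulate the data loosely echo the values reported above; they are not the study data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        def negloglik(log_sd, d):
            su, s1, s2 = np.exp(log_sd)   # underlying SD, method-1 SD, method-2 SD
            cov = [[su**2 + s1**2, su**2],
                   [su**2, su**2 + s2**2]]
            return -multivariate_normal(mean=[0, 0], cov=cov).logpdf(d).sum()

        rng = np.random.default_rng(6)
        n = 112
        truth = rng.normal(0, 2.7, n)                          # underlying offsets (mm)
        d = np.column_stack([truth + rng.normal(0, 1.8, n),    # e.g. portal imaging
                             truth + rng.normal(0, 1.5, n)])   # e.g. surface measure
        res = minimize(negloglik, x0=np.log([1.0, 1.0, 1.0]),
                       args=(d,), method='Nelder-Mead')
        sd_underlying, sd_method1, sd_method2 = np.exp(res.x)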

  18. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    PubMed

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in the literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  19. MLE (Maximum Likelihood Estimator) reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    SciTech Connect

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
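
    The MLE algorithm referenced here (and in the ROC study of item 13) rests on the same core iteration, the standard MLEM update for Poisson data, which is short enough to sketch with a random stand-in for the transition matrix:

        import numpy as np

        def mlem(A, y, n_iter=50):
            # MLEM update: x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A x)_i,
            # with sensitivities s_j = sum_i A_ij.
            x = np.ones(A.shape[1])
            s = np.maximum(A.sum(axis=0), 1e-12)
            for _ in range(n_iter):
                proj = np.maximum(A @ x, 1e-12)
                x *= (A.T @ (y / proj)) / s
            return x

        # Toy system: 40 detector bins viewing 20 pixels.
        rng = np.random.default_rng(7)
        A = rng.uniform(0, 1, size=(40, 20))   # stand-in transition matrix
        x_true = rng.uniform(0, 10, size=20)
        y = rng.poisson(A @ x_true).astype(float)
        x_hat = mlem(A, y)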

  20. Bayesian Monte Carlo and maximum likelihood approach for uncertainty estimation and risk management: Application to lake oxygen recovery model.

    PubMed

    Chaudhary, Abhishek; Hantush, Mohamed M

    2017-01-01

    Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML), to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient for oxygen and wind speed and the associated 95% confidence band, which are shown to be consistent with reported measured values at five different lakes. Finally, we illustrate the robustness of the BMCML for solving risk-based water quality management problems, showing that neglecting cross-correlations between parameters could lead to an improper required BOD load reduction to achieve the compliance criterion of 5 mg/L.
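
    The BMCML combination can be illustrated with a toy first-order recovery model: parameter draws from a prior are weighted by a Gaussian (maximum-likelihood) error model, yielding posterior summaries without MCMC. The model form, prior bounds, and error SD here are assumptions for illustration only.

        import numpy as np

        def do_model(t, k, do_sat=9.0, do0=2.0):
            # First-order recovery toward saturation -- a stand-in for the
            # paper's analytical wind-driven solution.
            return do_sat - (do_sat - do0) * np.exp(-k * t)

        rng = np.random.default_rng(8)
        t = np.arange(0, 30.0)
        obs = do_model(t, k=0.15) + rng.normal(0, 0.3, t.size)  # synthetic DO data

        # Bayesian Monte Carlo with a Gaussian likelihood.
        k_draws = rng.uniform(0.01, 0.5, 20000)   # prior on the rate constant
        sigma = 0.3                               # assumed error SD
        ll = np.array([-0.5 * np.sum((obs - do_model(t, k))**2) / sigma**2
                       for k in k_draws])
        w = np.exp(ll - ll.max())
        w /= w.sum()

        k_mean = np.sum(w * k_draws)
        order = np.argsort(k_draws)
        lo, hi = np.interp([0.025, 0.975], np.cumsum(w[order]), k_draws[order])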

  1. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, afftected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matern form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal to assign likely formation histories for the patches under consideration. Our results

  2. An Approximation for the Bias Function of the Maximum Likelihood Estimate of a Latent Variable for the General Case Where the Item Responses Are Discrete.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1993-01-01

    An approximation for the bias function of the maximum likelihood estimate of the latent trait or ability is developed for the general case where item responses are discrete, which includes the dichotomous response level, the graded response level, and the nominal response level. (SLD)

  3. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  4. Voxelwise meta-analysis of gray matter anomalies in progressive supranuclear palsy and Parkinson's disease using anatomic likelihood estimation

    PubMed Central

    Shao, Na; Yang, Jing; Li, Jianpeng; Shang, Hui-Fang

    2014-01-01

    Numerous voxel-based morphometry (VBM) studies on gray matter (GM) of patients with progressive supranuclear palsy (PSP) and Parkinson's disease (PD) have been conducted separately. Identifying the different neuroanatomical changes in GM resulting from PSP and PD through meta-analysis will aid the differential diagnosis of PSP and PD. In this study, a systematic review of VBM studies of patients with PSP and PD relative to healthy controls (HC) in the Embase and PubMed databases from January 1995 to April 2013 was conducted. The anatomical distribution of the coordinates of GM differences was meta-analyzed using anatomical likelihood estimation. Separate maps of GM changes were constructed and subtraction meta-analysis was performed to explore the differences in GM abnormalities between PSP and PD. Nine PSP studies and 24 PD studies were included. GM reductions were present in the bilateral thalamus, basal ganglia, midbrain, insular cortex and inferior frontal gyrus, and left precentral gyrus and anterior cingulate gyrus in PSP. Atrophy of GM was concentrated in the bilateral middle and inferior frontal gyrus, precuneus, left precentral gyrus, middle temporal gyrus, right superior parietal lobule, and right cuneus in PD. Subtraction meta-analysis indicated that GM volume was lower in the bilateral midbrain, thalamus, and insula in PSP compared with that in PD. Our meta-analysis indicated that PSP and PD shared a similar distribution of neuroanatomical changes in the frontal lobe, including inferior frontal gyrus and precentral gyrus, and that atrophy of the midbrain, thalamus, and insula are neuroanatomical markers for differentiating PSP from PD. PMID:24600372

  5. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  6. Brain Correlates of Cognitive Remediation in Schizophrenia: Activation Likelihood Analysis Shows Preliminary Evidence of Neural Target Engagement

    PubMed Central

    Ramsay, Ian S.; MacDonald, Angus W.

    2015-01-01

    Cognitive remediation training (CRT) for schizophrenia has been found to improve cognitive functioning and influence neural plasticity. However, with various training approaches and mixed findings, the mechanisms driving generalization of cognitive skills from CRT are unclear. In this meta-analysis of extant imaging studies examining CRT’s effects, we sought to clarify whether varying approaches to CRT suggest common neural changes and whether such mechanisms are restorative or compensatory. We conducted a literature search to identify studies appropriate for inclusion in an activation likelihood estimation (ALE) meta-analysis. Our criteria required studies to consist of training-based interventions designed to improve patients’ cognitive or social functioning, including generalization to untrained circumstances. Studies were also required to examine changes in pre- vs posttraining functional activation using functional magnetic resonance imaging or positron emission tomography. The literature search identified 162 articles, 9 of which were appropriate for inclusion. ALE analyses comparing pre- and posttraining brain activation showed increased activity in the lateral and medial prefrontal cortex (PFC), parietal cortex, insula, and the caudate and thalamus. Notably, activation associated with CRT in the left PFC and thalamus partially overlapped with previous meta-analytically identified areas associated with deficits in working memory, executive control, and facial emotion processing in schizophrenia. We conclude that CRT interventions from varying theoretic modalities elicit plasticity in areas that support cognitive and socioemotional processes in this early set of studies. While preliminary, these changes appear to be both restorative and compensatory, though thalamocortical areas previously associated with dysfunction may be common sources of plasticity for cognitive remediation in schizophrenia. PMID:25800249

  7. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  8. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  9. Simulation for position determination of distal and proximal edges for SOBP irradiation in hadron therapy by using the maximum likelihood estimation method

    NASA Astrophysics Data System (ADS)

    Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro

    2005-12-01

    In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we propose to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.

  10. Reducing the likelihood of future human activities that could affect geologic high-level waste repositories

    SciTech Connect

    Not Available

    1984-05-01

    The disposal of radioactive wastes in deep geologic formations provides a means of isolating the waste from people until the radioactivity has decayed to safe levels. However, isolating people from the wastes is a different problem, since we do not know what the future condition of society will be. The Human Interference Task Force was convened by the US Department of Energy to determine whether reasonable means exist (or could be developed) to reduce the likelihood of future humans unintentionally intruding on radioactive waste isolation systems. The task force concluded that significant reductions in the likelihood of human interference could be achieved, for perhaps thousands of years into the future, if appropriate steps are taken to communicate the existence of the repository. Consequently, for two years the task force directed most of its study toward the area of long-term communication. Methods are discussed for achieving long-term communication by using permanent markers and widely disseminated records, with various steps taken to provide multiple levels of protection against loss, destruction, and major language/societal changes. Also developed is the concept of a universal symbol to denote Caution - Biohazardous Waste Buried Here. If used for the thousands of non-radioactive biohazardous waste sites in this country alone, a symbol could transcend generations and language changes, thereby vastly improving the likelihood of successful isolation of all buried biohazardous wastes.

  11. Partial likelihood estimation of IRT models with censored lifetime data: an application to mental disorders in the ESEMeD surveys.

    PubMed

    Forero, Carlos G; Almansa, Josué; Adroher, Núria D; Vermunt, Jeroen K; Vilagut, Gemma; De Graaf, Ron; Haro, Josep-Maria; Alonso Caballero, Jordi

    2014-07-01

    Developmental studies of mental disorders drawing on epidemiological data are often based on cross-sectional retrospective surveys. Under such designs, observations are right-censored, causing underestimation of lifetime prevalences and correlations, and inducing bias in latent trait models on the observations. In this paper we propose a Partial Likelihood (PL) method to estimate unbiased IRT models of lifetime predisposition to develop a certain outcome. A two-step estimation procedure corrects the IRT likelihood of outcome appearance with a function depending on (a) projected outcome frequencies at the end of the risk period, and (b) outcome censoring status at the time of the observation. Simulation results showed that the PL method yielded good recovery of true frequencies and intercepts. Slopes were best estimated when events were sufficiently correlated. When PL is applied to lifetime mental health disorders (assessed in the ESEMeD project surveys), estimated univariate prevalences were, on average, 1.4 times above raw estimates, and 2.06 times higher in the case of bivariate prevalences.

  12. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach

    PubMed Central

    2013-01-01

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ a ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370
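
    Contrastive divergence in its simplest form approximates the likelihood gradient by the difference between "reconstruction"-side and data-side expectations of dE/dtheta, with the reconstruction produced by a single MCMC step from the data. A toy example for the Gaussian energy E(x; lam) = 0.5*lam*x^2, using a Langevin step as the sampler (the step sizes are arbitrary, and CD-1 leaves a small step-size bias):

        import numpy as np

        rng = np.random.default_rng(10)
        data = rng.normal(0, 1/np.sqrt(2.0), size=5000)   # true precision lam = 2

        def dE_dlam(x):
            # E(x; lam) = 0.5 * lam * x**2  =>  dE/dlam = 0.5 * x**2
            return 0.5 * x**2

        lam, eta, eps = 0.5, 0.05, 0.05
        for _ in range(2000):
            # One Langevin step from the data gives the reconstruction sample.
            recon = data - eps * lam * data \
                    + np.sqrt(2 * eps) * rng.normal(size=data.size)
            # CD-1 gradient ascent on the log-likelihood.
            lam += eta * (dE_dlam(recon).mean() - dE_dlam(data).mean())
        # lam settles near the true precision of 2 (up to the step-size bias).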

  13. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients.

    PubMed

    Urgesi, Cosimo; Candidi, Matteo; Avenanti, Alessio

    2014-01-01

    Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others' actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal, and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others' actions. The specific anatomical substrates of such neuropsychological deficits however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding.

  14. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients

    PubMed Central

    Urgesi, Cosimo; Candidi, Matteo; Avenanti, Alessio

    2014-01-01

    Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal, and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding. PMID:24910603

  15. Application of maximum likelihood estimator in nano-scale optical path length measurement using spectral-domain optical coherence phase microscopy

    PubMed Central

    Motaghian Nezam, S. M. R.; Joo, C; Tearney, G. J.; de Boer, J. F.

    2009-01-01

    Spectral-domain optical coherence phase microscopy (SD-OCPM) measures minute phase changes in transparent biological specimens using a common path interferometer and a spectrometer based optical coherence tomography system. The Fourier transform of the acquired interference spectrum in spectral-domain optical coherence tomography (SD-OCT) is complex and the phase is affected by contributions from inherent random noise. To reduce this phase noise, knowledge of the probability density function (PDF) of data becomes essential. In the present work, the intensity and phase PDFs of the complex interference signal are theoretically derived and the optical path length (OPL) PDF is experimentally validated. The full knowledge of the PDFs is exploited for optimal estimation (Maximum Likelihood estimation) of the intensity, phase, and signal-to-noise ratio (SNR) in SD-OCPM. Maximum likelihood (ML) estimates of the intensity, SNR, and OPL images are presented for two different scan modes using Bovine Pulmonary Artery Endothelial (BPAE) cells. To investigate the phase accuracy of SD-OCPM, we experimentally calculate and compare the cumulative distribution functions (CDFs) of the OPL standard deviation and the square root of the Cramér-Rao lower bound (1/(2·SNR)) over 100 BPAE images for two different scan modes. The correction to the OPL measurement by applying ML estimation to SD-OCPM for BPAE cells is demonstrated. PMID:18957999

  16. Development of an integrated genetic map of a sugarcane (Saccharum spp.) commercial cross, based on a maximum-likelihood approach for estimation of linkage and linkage phases.

    PubMed

    Garcia, A A F; Kido, E A; Meza, A N; Souza, H M B; Pinto, L R; Pastina, M M; Leite, C S; Silva, J A G da; Ulian, E C; Figueira, A; Souza, A P

    2006-01-01

    Sugarcane (Saccharum spp.) is a clonally propagated outcrossing polyploid crop of great importance in tropical agriculture. Up to now, all sugarcane genetic maps had been developed using either full-sib progenies derived from interspecific crosses or from selfing, both approaches not directly adopted in conventional breeding. We have developed a single integrated genetic map using a population derived from a cross between two pre-commercial cultivars ('SP80-180' x 'SP80-4966') using a novel approach based on the simultaneous maximum-likelihood estimation of linkage and linkage phases method specially designed for outcrossing species. From a total of 1,118 single-dose markers (RFLP, SSR and AFLP) identified, 39% derived from a testcross configuration between the parents segregating in a 1:1 fashion, while 61% segregated 3:1, representing heterozygous markers in both parents with the same genotypes. The markers segregating 3:1 were used to establish linkage between the testcross markers. The final map comprised 357 linked markers, including 57 RFLPs, 64 SSRs and 236 AFLPs that were assigned to 131 co-segregation groups, considering a LOD score of 5, and a recombination fraction of 37.5 cM with map distances estimated by Kosambi function. The co-segregation groups represented a total map length of 2,602.4 cM, with a marker density of 7.3 cM. When the same data were analyzed using JoinMap software, only 217 linked markers were assigned to 98 co-segregation groups, spanning 1,340 cM, with a marker density of 6.2 cM. The maximum-likelihood approach reduced the number of unlinked markers to 761 (68.0%), compared to 901 (80.5%) using JoinMap. All the co-segregation groups obtained using JoinMap were present in the map constructed based on the maximum-likelihood method. Differences on the marker order within the co-segregation groups were observed between the two maps. Based on RFLP and SSR markers, 42 of the 131 co-segregation groups were assembled into 12 putative

  17. Aerodynamic derivatives for an oblique wing aircraft estimated from flight data by using a maximum likelihood technique

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1978-01-01

    There are several practical problems in using current techniques with five degree of freedom equations to estimate the stability and control derivatives of oblique wing aircraft from flight data. A technique was developed to estimate these derivatives by separating the analysis of the longitudinal and lateral directional motion without neglecting cross coupling effects. Although previously applied to symmetrical aircraft, the technique was not expected to be adequate for oblique wing vehicles. The application of the technique to flight data from a remotely piloted oblique wing aircraft is described. The aircraft instrumentation and data processing were reviewed, with particular emphasis on the digital filtering of the data. A complete set of flight determined stability and control derivative estimates is presented and compared with predictions. The results demonstrated that the relatively simple approach developed was adequate to obtain high quality estimates of the aerodynamic derivatives of such aircraft.

  18. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
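
    The quoted probabilities follow from the Poisson relation P(N >= 1) = 1 - exp(-lambda) over a decade. The rates for VEI >= 5 and VEI >= 6 below are back-solved from the stated probabilities rather than taken from the eruption catalog:

        import math

        def p_at_least_one(rate_per_decade):
            # Poisson: P(N >= 1) = 1 - exp(-lambda).
            return 1 - math.exp(-rate_per_decade)

        # ~7 VEI>=4 events per decade; rates of ~0.67 and ~0.20 per decade
        # reproduce the quoted 49% and 18% figures.
        for label, lam in [("VEI >= 4", 7.0), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
            print(f"{label}: P(>=1 per decade) = {p_at_least_one(lam):.0%}")
        # VEI >= 4: ~100% (i.e., >99%);  VEI >= 5: 49%;  VEI >= 6: 18%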

  19. User's guide: Nimbus-7 Earth radiation budget narrow-field-of-view products. Scene radiance tape products, sorting into angular bins products, and maximum likelihood cloud estimation products

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard

    1990-01-01

    The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.

  20. Joint penalized-likelihood reconstruction of time-activity curves and regions-of-interest from projection data in brain PET.

    PubMed

    Krestyannikov, E; Tohka, J; Ruotsalainen, U

    2008-06-07

    This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.

  1. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance, since it contributes to enhancing system reliability and availability. Moreover, knowledge about fault-mode behavior is extremely important for improving system protection and fault-tolerant control. Fault detection and diagnosis in squirrel-cage induction machines based on motor current signature analysis (MCSA) has been widely investigated, and several high-resolution spectral estimation techniques have been developed and used to detect abnormal operating conditions. This paper focuses on the application of MCSA to the detection of abnormal mechanical conditions that may lead to induction machine failure, specifically single-point bearing defects, using parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm is developed to detect bearing faults from their characteristic frequencies; it is used to estimate the fundamental frequency and the fault-related frequencies. An amplitude estimator of the fault characteristic frequencies is then proposed, and a fault indicator is derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, generated by a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Experimental data are then used for validation.
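
    The amplitude-estimation step lends itself to a compact illustration: once the fundamental and fault-related frequencies have been estimated, component amplitudes follow from a least-squares fit. The sketch below (Python/NumPy) is a generic stand-in under that assumption; the frequencies, noise level, and ratio-style fault indicator are illustrative choices, not the paper's exact estimator.

      import numpy as np

      def sinusoid_amplitudes(x, fs, freqs):
          # Least-squares amplitudes of known sinusoidal components:
          # one cosine and one sine column per frequency.
          t = np.arange(len(x)) / fs
          H = np.column_stack([f(2 * np.pi * fk * t)
                               for fk in freqs for f in (np.cos, np.sin)])
          coef, *_ = np.linalg.lstsq(H, x, rcond=None)
          c = coef.reshape(-1, 2)
          return np.hypot(c[:, 0], c[:, 1])

      fs = 10_000.0
      t = np.arange(int(fs)) / fs
      x = (2.0 * np.sin(2 * np.pi * 50 * t)      # fundamental (50 Hz)
           + 0.1 * np.sin(2 * np.pi * 162 * t)   # assumed fault frequency
           + 0.05 * np.random.randn(t.size))
      amps = sinusoid_amplitudes(x, fs, [50.0, 162.0])
      print(amps, "fault indicator:", amps[1] / amps[0])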

  2. Technical note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximal likelihood programs.

    PubMed

    Kachman, S D; Van Vleck, L D

    2007-10-01

    The multiple-trait derivative-free REML set of programs was written to handle partially missing data for multiple-trait analyses as well as single-trait models. Standard errors of genetic parameters were reported for univariate models and for multiple-trait analyses only when all traits were measured on animals with records. In addition to estimating (co)variance components for multiple-trait models with partially missing data, this paper shows how the multiple-trait derivative-free REML set of programs can also estimate SE by augmenting the data file when not all animals have all traits measured. Although the standard practice has been to eliminate records with partially missing data, that practice uses only a subset of the available data. In some situations, the elimination of partial records can result in elimination of all the records, such as one trait measured in one environment and a second trait measured in a different environment. An alternative approach requiring minor modifications of the original data and model was developed that provides estimates of the SE using an augmented data set that gives the same residual log likelihood as the original data for multiple-trait analyses when not all traits are measured. Because the same residual vector is used for the original data and the augmented data, the resulting REML estimators along with their sampling properties are identical for the original and augmented data, so that SE for estimates of genetic parameters can be calculated.

  3. All Physical Activity May Not Be Associated with a Lower Likelihood of Adolescent Smoking Uptake

    PubMed Central

    Audrain-McGovern, Janet; Rodriguez, Daniel

    2015-01-01

    Objective Research has documented that physical activity is associated with a lower risk of adolescent smoking uptake, yet it is unclear whether this relationship exists for all types of physical activity. We sought to determine whether certain types of physical activity are associated with a decreased or an increased risk of adolescent smoking uptake. Methods In this longitudinal cohort study, adolescents (n=1,356) were surveyed every six months for four years (age 14-18 years). Smoking and physical activity were measured at each of the eight time-points. Physical activities that were negatively associated with smoking across the eight waves were considered positive physical activities (i.e., PPA; linked to not smoking, such as racquet sports, running, and swimming laps). Physical activities that were positively associated with smoking across the eight waves were considered negative physical activities (i.e., NPA; linked to smoking, such as skating, walking, bicycling, sport fighting, and competitive wrestling). Results Associative Processes Latent Growth Curve Modeling revealed that each 30-minute increase in NPA per week at baseline was associated with a 4-fold increase in the odds of smoking progression (OR=4.10, 95% CI=2.14, 7.83). By contrast, each 30-minute increase in PPA at baseline was associated with a 51% decrease in the odds of smoking progression (OR=.49, 95% CI=.25, .93). Conclusions The type of physical activity that an adolescent engages in appears to be important for the uptake of cigarette smoking. These associative relationships warrant consideration in interventions to increase overall physical activity and in those promoting physical activity to prevent smoking uptake. PMID:26280377

  4. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  5. Maximum Likelihood Fusion Model

    DTIC Science & Technology

    2014-08-09

    Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation. (Only report front matter survives for this entry, including a figure caption illustrating mobile robotic agents such as Pioneer land rovers and Segways.)

  6. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging using Photon-Counting Detectors

    DTIC Science & Technology

    2013-03-21

    Only fragments of this report are available: the abstract discusses instruments where frequency estimates are calculated from coherently detected fields, e.g., coherent Doppler LIDAR, and reports Cramer-Rao bound (CRB) results; the remainder is reference-list residue.

  7. Empirical aspects of the Whittle-based maximum likelihood method in jointly estimating seasonal and non-seasonal fractional integration parameters

    NASA Astrophysics Data System (ADS)

    Marques, G. O. L. C.

    2011-01-01

    This paper addresses the efficiency of the maximum likelihood (ML) method in jointly estimating the fractional integration parameters d_s and d, respectively associated with seasonal and non-seasonal long-memory components in discrete stochastic processes. The influence of the size of the non-seasonal parameter on seasonal parameter estimation, and vice versa, was analyzed in the space (d, d_s) ∈ (0,1) × (0,1) by using the mean squared error statistics MSE(d̂_s) and MSE(d̂). This study was based on Monte Carlo simulation experiments using the ML estimator with Whittle's approximation in the frequency domain. Numerical results revealed that efficiency in jointly estimating each integration parameter is affected in different ways by their sizes: as d_s and d increase simultaneously to 1, MSE(d̂_s) and MSE(d̂) become larger; however, the effects on MSE(d̂_s) are much stronger than the effects on MSE(d̂). Moreover, as each parameter tends individually to 1, MSE(d̂) becomes larger, but MSE(d̂_s) is barely influenced.
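
    Concretely, the Whittle approximation replaces the exact Gaussian likelihood with a sum over Fourier frequencies of log f(λ_j) + I(λ_j)/f(λ_j), where I is the periodogram and f the model spectral density, here proportional to |1 − e^{−iλ}|^{−2d} |1 − e^{−isλ}|^{−2d_s}. A minimal sketch of the objective (Python/NumPy) follows, assuming that standard parameterization; profiling out the scale and dropping the exact seasonal poles are common simplifications, not details taken from the paper.

      import numpy as np

      def whittle_nll(d, ds, x, s=12):
          # Negative Whittle log-likelihood (up to constants), scale profiled out.
          n = len(x)
          j = np.arange(1, (n - 1) // 2 + 1)
          lam = 2 * np.pi * j / n
          I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
          # Spectrum shape for seasonal + non-seasonal long memory.
          g = (np.abs(1 - np.exp(-1j * lam)) ** (-2 * d)
               * np.abs(1 - np.exp(-1j * s * lam)) ** (-2 * ds))
          keep = np.isfinite(g) & (g > 0)   # drop exact seasonal poles
          g, I = g[keep], I[keep]
          sig2 = np.mean(I / g)             # profiled-out innovation scale
          return np.sum(np.log(sig2 * g) + I / (sig2 * g))

    Minimizing whittle_nll over (d, d_s) ∈ (0,1) × (0,1), e.g. with scipy.optimize.minimize, yields the joint estimates whose MSE surfaces the paper maps.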

  8. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which uses Markov chains to sample the posterior distribution of all parameters, so that, by Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, without further computations it provides the spectral index uncertainty, it is computationally stable, and it detects multimodality.

  9. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS)-based adaptive filter for the proposed signal model is investigated, and promising simulation results testify to its potential for clutter rejection, leading to more accurate windspeed estimation and thus a better assessment of the windshear hazard.
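
    The LMS update at the heart of such an adaptive filter takes only a few lines. A minimal sketch (Python/NumPy) of a one-step linear predictor used as a clutter canceller: because the sinusoidal clutter is predictable, the prediction error approximates the broadband weather-plus-noise component. The filter order and step size are illustrative assumptions, not the dissertation's configuration.

      import numpy as np

      def lms_cancel(x, order=8, mu=0.01):
          # One-step LMS linear predictor; returns (prediction, residual).
          w = np.zeros(order)
          y = np.zeros_like(x)
          e = np.zeros_like(x)
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]   # most recent samples first
              y[n] = w @ u               # predicted (clutter) sample
              e[n] = x[n] - y[n]         # residual = clutter-suppressed output
              w += 2 * mu * e[n] * u     # LMS weight update
          return y, e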

  10. Maximum likelihood estimate of life expectancy in the prehistoric Jomon: Canine pulp volume reduction suggests a longer life expectancy than previously thought.

    PubMed

    Sasaki, Tomohiko; Kondo, Osamu

    2016-09-01

    Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years, while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age indicator, which, because of tooth durability, is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate of pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate falls within the range of modern hunter-gatherers, with a significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15 than the result without adjustment. Considering the ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, the actual life expectancy at age 15 may have been as high as ∼35.3 years.

  11. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    PubMed

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common-mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.
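
    The two-stage idea (climb to the posterior mode first, then start the chain there instead of discarding a burn-in) can be sketched generically. A minimal sketch (Python/NumPy) with a user-supplied log-posterior and gradient; the plain gradient ascent, step sizes, and random-walk Metropolis proposal are illustrative stand-ins for the paper's GMCMC machinery.

      import numpy as np

      def mode_then_mcmc(logpost, grad, theta0, n_samples=5000,
                         ascent_steps=500, lr=1e-3, prop_sd=0.1, seed=0):
          rng = np.random.default_rng(seed)
          # Stage 1: gradient ascent to the posterior mode
          # (the PME; an MLE under a flat prior).
          theta = np.asarray(theta0, dtype=float)
          for _ in range(ascent_steps):
              theta = theta + lr * grad(theta)
          # Stage 2: random-walk Metropolis started at the mode (no burn-in).
          samples, lp = [], logpost(theta)
          for _ in range(n_samples):
              prop = theta + prop_sd * rng.standard_normal(theta.shape)
              lp_prop = logpost(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              samples.append(theta.copy())
          return np.array(samples)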

  12. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with Targeted Maximum Likelihood Estimation

    PubMed Central

    Schnitzer, Mireille E; Moodie, Erica EM; van der Laan, Mark J; Platt, Robert W; Klein, Marina B

    2013-01-01

    Summary Despite modern effective HIV treatment, hepatitis C virus (HCV) co-infection is associated with a high risk of progression to end-stage liver disease (ESLD) which has emerged as the primary cause of death in this population. Clinical interest lies in determining the impact of clearance of HCV on risk for ESLD. In this case study, we examine whether HCV clearance affects risk of ESLD using data from the multicenter Canadian Co-infection Cohort Study. Complications in this survival analysis arise from the time-dependent nature of the data, the presence of baseline confounders, loss to follow-up, and confounders that change over time, all of which can obscure the causal effect of interest. Additional challenges included non-censoring variable missingness and event sparsity. In order to efficiently estimate the ESLD-free survival probabilities under a specific history of HCV clearance, we demonstrate the doubly-robust and semiparametric efficient method of Targeted Maximum Likelihood Estimation (TMLE). Marginal structural models (MSM) can be used to model the effect of viral clearance (expressed as a hazard ratio) on ESLD-free survival and we demonstrate a way to estimate the parameters of a logistic model for the hazard function with TMLE. We show the theoretical derivation of the efficient influence curves for the parameters of two different MSMs and how they can be used to produce variance approximations for parameter estimates. Finally, the data analysis evaluating the impact of HCV on ESLD was undertaken using multiple imputations to account for the non-monotone missing data. PMID:24571372

  13. Sub-200 ps CRT in monolithic scintillator PET detectors using digital SiPM arrays and maximum likelihood interaction time estimation.

    PubMed

    van Dam, Herman T; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R

    2013-05-21

    Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm³, 16 × 16 × 20 mm³, 24 × 24 × 10 mm³, and 24 × 24 × 20 mm³. The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm³ LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.

  14. Bit-error rate performance of coherent optical M-ary PSK/QAM using decision-aided maximum likelihood phase estimation.

    PubMed

    Yu, Changyuan; Zhang, Shaoliang; Kam, Pooi Yuen; Chen, Jian

    2010-06-07

    The bit-error rate (BER) expressions of 16-phase-shift keying (PSK) and 16-quadrature amplitude modulation (QAM) are analytically obtained in the presence of a phase error. By averaging over the statistics of the phase error, the performance penalty can be analytically examined as a function of the phase error variance. The phase error variances leading to a 1-dB signal-to-noise ratio per bit penalty at BER = 10⁻⁴ have been found to be 8.7 × 10⁻² rad², 1.2 × 10⁻² rad², 2.4 × 10⁻³ rad², 6.0 × 10⁻⁴ rad² and 2.3 × 10⁻³ rad² for binary, quadrature, 8-, and 16-PSK and 16QAM, respectively. With the knowledge of the allowable phase error variance, the corresponding laser linewidth tolerance can be predicted. We extend the phase error variance analysis of decision-aided maximum likelihood carrier phase estimation in M-ary PSK to 16QAM, and successfully predict the laser linewidth tolerance in different modulation formats, which agrees well with the Monte Carlo simulations. Finally, approximate BER expressions for different modulation formats are introduced to allow a quick estimation of the BER performance as a function of the phase error variance. Further, the BER approximations give a lower bound on the laser linewidth requirements in M-ary PSK and 16QAM. It is shown that as far as laser linewidth tolerance is concerned, 16QAM outperforms 16PSK, which has the same spectral efficiency (SE), and has nearly the same performance as 8PSK, which has lower SE. Thus, 16QAM is a promising modulation format for high-SE coherent optical communications.
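
    Penalties like these can be spot-checked by Monte Carlo simulation. A minimal sketch (Python/NumPy) that draws a Gaussian phase error per symbol on a unit-energy 16QAM constellation and counts symbol errors; the per-symbol independent phase error, the operating point, and the Gray-mapping approximation BER ≈ SER/4 are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
      pts = pts / np.sqrt((np.abs(pts) ** 2).mean())   # unit average energy

      def ser_16qam(ebn0_db, phase_var, n=100_000):
          ebn0 = 10 ** (ebn0_db / 10)
          sigma = np.sqrt(1 / (8 * ebn0))              # Es = 1, 4 bits/symbol
          tx = pts[rng.integers(16, size=n)]
          rx = (tx * np.exp(1j * np.sqrt(phase_var) * rng.standard_normal(n))
                + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
          det = np.abs(rx[:, None] - pts[None, :]).argmin(axis=1)
          return np.mean(pts[det] != tx)

      # Gray-mapping approximation: BER ~ SER / log2(16).
      print(ser_16qam(10.0, 2.3e-3) / 4)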

  15. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may need to be estimated from the mixed-stock data as well. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, simulating data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing
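
    For the fully sampled scenario, where the baseline nursery-signatures are known and only the mixing proportions must be estimated, the ML solution has a standard EM form. A minimal sketch (Python/NumPy/SciPy) assuming Gaussian baseline signatures with known means and covariances; the Gaussian form and the fixed components are assumptions made for illustration.

      import numpy as np
      from scipy.stats import multivariate_normal

      def em_mixing_proportions(X, means, covs, n_iter=200):
          # ML mixing proportions for a mixture with fixed component densities.
          K = len(means)
          dens = np.column_stack([multivariate_normal.pdf(X, means[k], covs[k])
                                  for k in range(K)])   # (n, K) densities
          pi = np.full(K, 1.0 / K)
          for _ in range(n_iter):
              resp = dens * pi                          # E-step: responsibilities
              resp /= resp.sum(axis=1, keepdims=True)
              pi = resp.mean(axis=0)                    # M-step: update proportions
          return pi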

  16. Novel applications using maximum-likelihood estimation in optical metrology and nuclear medical imaging: Point-diffraction interferometry and BazookaPET

    NASA Astrophysics Data System (ADS)

    Park, Ryeojin

    This dissertation aims to investigate two different applications in optics using maximum-likelihood (ML) estimation. The first application of ML estimation is used in optical metrology. For this application, an innovative iterative search method called the synthetic phase-shifting (SPS) algorithm is proposed. This search algorithm is used for estimation of a wavefront that is described by a finite set of Zernike Fringe (ZF) polynomials. In this work, we estimate the ZF coefficients, or parameter values, of the wavefront using a single interferogram obtained from a point-diffraction interferometer (PDI). In order to find the estimates, we first calculate the squared difference between the measured and simulated interferograms. Under certain assumptions, this squared-difference image can be treated as an interferogram showing the phase difference between the true wavefront deviation and the simulated wavefront deviation. The wavefront deviation is defined as the difference between the reference and the test wavefronts. We calculate the phase difference using a traditional phase-shifting technique without physical phase-shifters. We present a detailed forward model for the PDI interferogram, including the effect of the finite size of a detector pixel. The algorithm was validated with computational studies and its performance and constraints are discussed. A prototype PDI was built and the algorithm was also experimentally validated. A large wavefront deviation was successfully estimated without using null optics or physical phase-shifters. The experimental result shows that the proposed algorithm has great potential to provide an accurate tool for non-null testing. The second application of ML estimation is used in nuclear medical imaging. A high-resolution positron tomography scanner called BazookaPET is proposed. We have designed and developed a novel proof-of-concept detector element for a PET system called BazookaPET. In order to complete the PET configuration, at least

  17. Bayesian computation via empirical likelihood

    PubMed Central

    Mengersen, Kerrie L.; Pudlo, Pierre; Robert, Christian P.

    2013-01-01

    Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. PMID:23297233

  18. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
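
    The accept/reject step of the method fits in a few lines. A minimal sketch (Python/NumPy) with a symmetric random-walk proposal, so the Metropolis-Hastings ratio reduces to the prior ratio; the user-supplied simulator, the distance between simulated and observed summaries, and the tolerance eps are illustrative assumptions rather than the paper's population-genetics setup.

      import numpy as np

      def abc_mcmc(simulate, distance, log_prior, theta0, eps,
                   n_samples=10_000, prop_sd=0.1, seed=0):
          # Likelihood-free MCMC: a move is accepted only if data simulated
          # at the proposal fall within eps of the observed summaries.
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          chain = []
          for _ in range(n_samples):
              prop = theta + prop_sd * rng.standard_normal(theta.shape)
              # Symmetric proposal: the MH ratio reduces to the prior ratio.
              if (np.log(rng.uniform()) < log_prior(prop) - log_prior(theta)
                      and distance(simulate(prop, rng)) <= eps):
                  theta = prop
              chain.append(theta.copy())
          return np.array(chain)

    The chain's stationary distribution is the approximate posterior given that the distance is at most eps, which approaches the exact posterior as eps shrinks.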

  19. Refinement of a Bias-Correction Procedure for the Weighted Likelihood Estimator of Ability. Research Report. ETS RR-07-23

    ERIC Educational Resources Information Center

    Zhang, Jinming; Lu, Ting

    2007-01-01

    In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…

  20. Software Size Estimation Using Activity Point

    NASA Astrophysics Data System (ADS)

    Densumite, S.; Muenchaisri, P.

    2017-03-01

    Software size is widely recognized as an important parameter for effort and cost estimation. Currently there are many methods for measuring software size, including Source Lines of Code (SLOC), Function Points (FP), Netherlands Software Metrics Users Association (NESMA), Common Software Measurement International Consortium (COSMIC), and Use Case Points (UCP). SLOC is physically counted after the software is developed; the other methods compute size from functional, technical, and/or environmental aspects at an early phase of software development. In this research, an activity-point approach is proposed as another software size estimation method. Activity points are computed from the activity diagram and adjusted with technical complexity factors (TCF), environment complexity factors (ECF), and people risk factors (PRF). An evaluation of the approach is presented.
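
    The abstract does not spell out the adjustment formula; by analogy with Use Case Points, a multiplicative adjustment is one plausible reading. The sketch below (Python) is purely illustrative: the function name and the form AP = counted activities × TCF × ECF × PRF are assumptions, not the paper's definition.

      def activity_points(counted_activities, tcf, ecf, prf):
          # Hypothetical activity-point size: activities counted from the
          # activity diagram, scaled by technical (TCF), environment (ECF),
          # and people-risk (PRF) factors; the multiplicative form is assumed.
          return counted_activities * tcf * ecf * prf

      print(activity_points(42, tcf=1.05, ecf=0.95, prf=1.10))  # ~46.1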

  1. Quasi-likelihood for Spatial Point Processes

    PubMed Central

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    2014-01-01

    Summary Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation, which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970

  2. Technical Note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximal likelihood programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...

  3. The phylogenetic likelihood library.

    PubMed

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).

  4. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  5. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.

  6. Robust Parameter Estimation for the Mixed Weibull (Seven Parameter) Including the Method of Maximum Likelihood and the Method of Minimum Distance

    DTIC Science & Technology

    1997-03-01

    Only front matter survives for this entry: an AFIT master's thesis (report AFIT/GOR/ENY/97M-1), Air Force Institute of Technology, Air University, Wright-Patterson Air Force Base, Ohio, submitted in partial fulfillment of degree requirements; the remainder is reference-list residue.

  7. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.

  8. Maximum Likelihood Estimation of the Joint Covariance Matrix for Sections of Tests Given to Distinct Samples with Application to Test Equating.

    ERIC Educational Resources Information Center

    Thayer, Dorothy T.

    1983-01-01

    Estimation techniques for generating the covariance matrix for two new tests and an existing test, without the necessity of any examinee having to take two complete tests, are presented. An application of these techniques to linear, observed-score test equating is given. (Author/JKS)

  9. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    PubMed

    Molenaar, P C; Nesselroade, J R

    1998-07-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), which is a generalization of the traditional common factor model, has been proposed by Molenaar (1985) for systematically extracting information from multivariate time-series via latent variable modeling. Implementation of the DFM has taken several forms, one of which involves specifying it as a covariance-structure model and estimating its parameters from a block-Toeplitz matrix derived from the multivariate time-series. We compare two methods for estimating DFM parameters within a covariance-structure framework - pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation - by means of a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates of comparable precision, but only the ADF method gives standard errors and chi-square statistics that appear to be consistent. The relative ordering of the values of all estimates appears to be very similar across methods. When the manifest time-series is relatively short, the two methods appear to perform about equally well.
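
    The block-Toeplitz matrix in question is assembled from the lagged covariance matrices of the series. A minimal sketch (Python/NumPy), assuming the biased (1/T) lag-covariance estimator; normalization conventions vary across implementations.

      import numpy as np

      def block_toeplitz(X, n_lags):
          # Block-Toeplitz matrix of lagged covariances of a (T, p) series:
          # block (i, j) holds C[i-j], with C[-k] = C[k].T.
          Xc = X - X.mean(axis=0)
          T, p = Xc.shape
          C = [Xc[k:].T @ Xc[:T - k] / T for k in range(n_lags)]
          blocks = [[C[i - j] if i >= j else C[j - i].T for j in range(n_lags)]
                    for i in range(n_lags)]
          return np.block(blocks)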

  10. List-mode likelihood

    PubMed Central

    Barrett, Harrison H.; White, Timothy; Parra, Lucas C.

    2010-01-01

    As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. In various imaging systems the attributes can include several position variables, time variables, and energies. If more than about four attributes are measured for each event, it is not practical to record the data in an image matrix. Instead it is more efficient to use a simple list where every attribute is stored for every event. It is the purpose of this paper to discuss the concept of likelihood for such list-mode data. We present expressions for list-mode likelihood with an arbitrary number of attributes per photon and for both preset counts and preset time. Maximization of this likelihood can lead to a practical reconstruction algorithm with list-mode data, but that aspect is covered in a separate paper [IEEE Trans. Med. Imaging (to be published)]. An expression for lesion detectability for list-mode data is also derived and compared with the corresponding expression for conventional binned data. PMID:9379247
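
    The standard point-process form of such a likelihood (consistent with, though not copied from, the paper's notation) treats the attribute list as a Poisson process with rate λ(a; θ) over attribute space. For a preset exposure time,

      \ln L(\theta \mid a_1,\dots,a_J)
          = \sum_{j=1}^{J} \ln \lambda(a_j;\theta) - \int \lambda(a;\theta)\,da ,

    whereas for a preset number of counts J the attributes are i.i.d. with density pr(a; θ) = λ(a; θ) / ∫ λ(a'; θ) da', so the log-likelihood is Σ_j ln pr(a_j; θ). Either way, every measured attribute of every event enters the likelihood, which is the advantage over binned data.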

  11. Event-Related fMRI Studies of Episodic Encoding and Retrieval: Meta-Analyses Using Activation Likelihood Estimation

    ERIC Educational Resources Information Center

    Spaniol, Julia; Davidson, Patrick S. R.; Kim, Alice S. N.; Han, Hua; Moscovitch, Morris; Grady, Cheryl L.

    2009-01-01

    The recent surge in event-related fMRI studies of episodic memory has generated a wealth of information about the neural correlates of encoding and retrieval processes. However, interpretation of individual studies is hampered by methodological differences, and by the fact that sample sizes are typically small. We submitted results from studies of…

  12. Investigating bias in maximum-likelihood quantum-state tomography

    NASA Astrophysics Data System (ADS)

    Silva, G. B.; Glancy, S.; Vasconcelos, H. M.

    2017-02-01

    Maximum-likelihood quantum-state tomography yields estimators that are consistent, provided that the likelihood model is correct, but the maximum-likelihood estimators may have bias for any finite data set. The bias of an estimator is the difference between the expected value of the estimate and the true value of the parameter being estimated. This paper investigates bias in the widely used maximum-likelihood quantum-state tomography. Our goal is to understand how the amount of bias depends on factors such as the purity of the true state, the number of measurements performed, and the number of different bases in which the system is measured. For this, we perform numerical experiments that simulate optical homodyne tomography of squeezed thermal states under various conditions, perform tomography, and estimate bias in the purity of the estimated state. We find that estimates of higher purity states exhibit considerable bias, such that the estimates have lower purities than the true states.

  13. Estimation of humoral activity of Eleutherococcus senticosus.

    PubMed

    Drozd, Janina; Sawicka, Teresa; Prosińska, Joanna

    2002-01-01

    The aim of the present work was to estimate the influence of two plant pharmaceutical preparations containing an extract from the root of Eleutherococcus senticosus, Argoeleuter tablets and Immuplant tablets, on the humoral response of the immunological system. Experiments were performed with six-week-old female Balb/c mice. To reveal the influence of the preparations on elements of the immunological system, three ways of administering them were compared: before illness, during illness, and a combination of both. The results allow the following conclusions: the pharmaceutical preparations containing the extract from Eleutherococcus senticosus, administered orally, increase the level of immunoglobulins in the mice's blood serum; the preparations act with different potency, not fully dependent on the content of the marker of the active substance, eleutheroside E; dosage of the preparations should not be established based only on the extract content; and the best curative results, measured as stimulation of the humoral response of the organism, were obtained when a given preparation was administered therapeutically, although the combined administration - prophylactic followed by prolonged administration during illness - was also effective.

  14. SPT Lensing Likelihood: South Pole Telescope CMB lensing likelihood code

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Peiris, Hiranya V.; Verde, Licia

    2014-11-01

    The SPT lensing likelihood code, written in Fortran90, computes a Gaussian likelihood of the lensing potential power spectrum, using a file from CAMB (ascl:1102.026) that contains the normalization required to produce the power spectrum expected by the likelihood call.

  15. Decadal Variation of the Number of El Nino Onsets and El Nino-Related Months and Estimating the Likelihood of El Nino Onset in a Warming World

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2009-01-01

    Examination of the decadal variation of the number of El Nino onsets and El Nino-related months for the interval 1950-2008 clearly shows that the variation is better explained as one expressing normal fluctuation and not one related to global warming. Comparison of the recurrence periods for El Nino onsets against event durations for moderate/strong El Nino events results in a statistically important relationship that allows for the possible prediction of the onset for the next anticipated El Nino event. Because the last known El Nino was a moderate event of short duration (6 months), having onset in August 2006, unless it is a statistical outlier, one expects the next onset of El Nino probably in the latter half of 2009, with peak following in November 2009-January 2010. If true, then initial early extended forecasts of frequencies of tropical cyclones for the 2009 North Atlantic basin hurricane season probably should be revised slightly downward from near average-to-above average numbers to near average-to-below average numbers of tropical cyclones in 2009, especially as compared to averages since 1995, the beginning of the current high-activity interval for tropical cyclone activity.

  16. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

  17. Estimating ROI activity concentration with photon-processing and photon-counting SPECT imaging systems

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Recently, a new class of imaging systems, referred to as photon-processing (PP) systems, is being developed that uses real-time maximum-likelihood (ML) methods to estimate multiple attributes per detected photon and store these attributes in a list format. PP systems could have a number of potential advantages compared to systems that bin photons based on attributes such as energy, projection angle, and position, referred to as photon-counting (PC) systems. For example, PP systems do not suffer from binning-related information loss and provide the potential to extract information from attributes such as the energy deposited by the detected photon. To quantify the effects of this advantage on task performance, objective evaluation studies are required. We performed such a study in the context of quantitative 2-dimensional single-photon emission computed tomography (SPECT) imaging, with the end task of estimating the mean activity concentration within a region of interest (ROI). We first theoretically outline the effect of the null space on estimating the mean activity concentration, and argue that, due to this effect, PP systems could have better estimation performance than PC systems with noise-free data. To evaluate the performance of PP and PC systems with noisy data, we developed a singular value decomposition (SVD)-based analytic method to estimate the activity concentration from PP systems. Using simulations, we studied the accuracy and precision of this technique in estimating the activity concentration. We used this framework to objectively compare PP and PC systems on the activity concentration estimation task. We investigated the effects of varying the size of the ROI and varying the number of bins for the attribute corresponding to the angular orientation of the detector in a continuously rotating SPECT system. The results indicate that in several cases, PP systems offer improved estimation performance compared to PC systems.
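
    The linear-algebra core of an SVD-based estimator is a truncated pseudoinverse. A minimal sketch (Python/NumPy), assuming a known system matrix H mapping ROI activity concentrations to mean data; the truncation rule is an illustrative choice, and this generic estimator stands in for, rather than reproduces, the authors' method.

      import numpy as np

      def roi_estimate(H, g, rel_tol=1e-6):
          # Truncated-SVD least-squares estimate of ROI activity concentrations.
          # H: (m, n) system matrix; g: (m,) measured data.
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          keep = s > rel_tol * s[0]          # discard (near-)null-space modes
          s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
          return Vt.T @ (s_inv * (U.T @ g))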

  18. Computationally Efficient Composite Likelihood Statistics for Demographic Inference.

    PubMed

    Coffman, Alec J; Hsieh, Ping Hsun; Gravel, Simon; Gutenkunst, Ryan N

    2016-02-01

    Many population genetics tools employ composite likelihoods, because fully modeling genomic linkage is challenging. But traditional approaches to estimating parameter uncertainties and performing model selection require full likelihoods, so these tools have relied on computationally expensive maximum-likelihood estimation (MLE) on bootstrapped data. Here, we demonstrate that statistical theory can be applied to adjust composite likelihoods and perform robust computationally efficient statistical inference in two demographic inference tools: ∂a∂i and TRACTS. On both simulated and real data, the adjustments perform comparably to MLE bootstrapping while using orders of magnitude less computational time.

  19. A Monte Carlo comparison of the recovery of winds near upwind and downwind from the SASS-1 model function by means of the sum of squares algorithm and a maximum likelihood estimator

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.

    1984-01-01

    Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles, and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.

  20. NON-REGULAR MAXIMUM LIKELIHOOD ESTIMATION

    EPA Science Inventory

    Even though a body of data on the environmental occurrence of medicinal, government-approved ("ethical") pharmaceuticals has been growing over the last two decades (the subject of this book), nearly nothing is known about the disposition of illicit (illegal) drugs in th...

  1. Likelihood Estimation for Generalized Mixed Exponential Distributions.

    DTIC Science & Technology

    1984-07-01

    Only fragments survive for this entry: the abstract breaks off at "...specified beforehand," and the remainder is reference-list residue (e.g., L. Armijo, "Minimization of Functions Having Lipschitz Continuous..."; "Exponential Approximation via a Closed Form Gauss-Newton Method," IEEE Trans. Circuit Theory, CT-20 (1973), pp. 361-369).

  2. Accuracy of highly sexually active gay and bisexual men's predictions of their daily likelihood of anal sex and its relevance for intermittent event-driven HIV Pre-Exposure Prophylaxis

    PubMed Central

    Parsons, Jeffrey T.; Rendina, H. Jonathon; Grov, Christian; Ventuneac, Ana; Mustanski, Brian

    2014-01-01

    Objective We sought to examine highly sexually active gay and bisexual men's accuracy in predicting their sexual behavior for the purposes of informing future research on intermittent, event-driven HIV Pre-Exposure Prophylaxis (PrEP). Design For 30 days, 92 HIV-negative men completed a daily survey about their sexual behavior (n = 1,688 days of data) and indicated their likelihood of having anal sex with a casual male partner the following day. Method We utilized multilevel modeling to analyze the association between self-reported likelihood of and subsequent engagement in anal sex. Results We found a linear association between men's reported likelihood of anal sex with casual partners and the actual probability of engaging in sex, though men overestimated the likelihood of sex. Overall, we found that men were better at predicting when they would not have sex than when they would, particularly if any likelihood value greater than 0% was treated as indicative that sex might occur. We found no evidence that men's accuracy of prediction was affected by whether it was a weekend or whether they were using substances, though both did increase the probability of sex. Discussion These results suggested that, were men taking event-driven intermittent PrEP, 14% of doses could have been safely skipped with a minimal rate of false negatives using guidelines of taking a dose unless there was no chance (i.e., 0% likelihood) of sex on the following day. This would result in a savings of over $1,300 per year in medication costs per participant. PMID:25559594

  3. Revised activation estimates for silicon carbide

    SciTech Connect

    Heinisch, H.L.; Cheng, E.T.; Mann, F.M.

    1996-10-01

    Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of {sup 26}Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low activation material, even in first wall applications.

  4. Stepwise Signal Extraction via Marginal Likelihood

    PubMed Central

    Du, Chao; Kao, Chu-Lan Michael

    2015-01-01

    This paper studies the estimation of stepwise signals. To determine the number and locations of change-points of a stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
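
    The quadratic-cost dynamic program has the classic optimal-partitioning shape: score every candidate segment, then sweep left to right. A minimal sketch (Python/NumPy) using a Gaussian maximum-likelihood segment score with a constant per-segment penalty as a stand-in for the paper's marginal-likelihood score; the empirical-Bayes prior itself is not reproduced here.

      import numpy as np

      def segment(x, penalty):
          # Optimal partitioning, O(n^2): maximize the sum of segment scores
          # minus a penalty per segment; prev[] records the best split points.
          x = np.asarray(x, dtype=float)
          n = len(x)
          S1 = np.concatenate([[0.0], np.cumsum(x)])
          S2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

          def score(i, j):   # Gaussian ML score of segment x[i:j] (stand-in)
              m = j - i
              var = (S2[j] - S2[i]) / m - ((S1[j] - S1[i]) / m) ** 2
              return -0.5 * m * np.log(max(var, 1e-12))

          best = np.full(n + 1, -np.inf)
          best[0] = 0.0
          prev = np.zeros(n + 1, dtype=int)
          for j in range(1, n + 1):
              for i in range(j):
                  s = best[i] + score(i, j) - penalty
                  if s > best[j]:
                      best[j], prev[j] = s, i
          cps, j = [], n
          while j > 0:                 # backtrack the chosen segment ends
              cps.append(j)
              j = prev[j]
          return sorted(cps)[:-1]      # interior change-points only

    Larger penalties yield fewer change-points; in the paper's formulation this trade-off is governed by the marginal likelihood rather than a hand-set constant.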

  5. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later in the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  6. Sedentary behavior, physical activity, and likelihood of breast cancer among Black and White women: a report from the Southern Community Cohort Study.

    PubMed

    Cohen, Sarah S; Matthews, Charles E; Bradshaw, Patrick T; Lipworth, Loren; Buchowski, Maciej S; Signorello, Lisa B; Blot, William J

    2013-06-01

    Increased physical activity has been shown to be protective against breast cancer, although few studies have examined this association in Black women. In addition, limited evidence to date indicates that sedentary behavior may be an independent risk factor for breast cancer. We examined sedentary behavior and physical activity in relation to subsequent incident breast cancer in a nested case-control study of 546 cases (374 among Black women) and 2,184 matched controls enrolled in the Southern Community Cohort Study. Sedentary and physically active behaviors were assessed via self-report at study baseline (2002-2009) using a validated physical activity questionnaire. Conditional logistic regression was used to estimate mutually adjusted ORs and corresponding 95% confidence intervals (CI) for quartiles of sedentary and physical activity measures in relation to breast cancer risk. Being in the highest versus lowest quartile of total sedentary behavior (≥12 vs. <5.5 h/d) was associated with increased odds of breast cancer among White women [OR, 1.94 (95% CI, 1.01-3.70); P trend = 0.1] but not Black women [OR, 1.23 (95% CI, 0.82-1.83); P trend = 0.6] after adjustment for physical activity. After adjustment for sedentary activity, greater physical activity was associated with reduced odds of breast cancer among White women (P trend = 0.03) only. In conclusion, independent of one another, sedentary behavior and physical activity are associated with breast cancer risk among White women. Differences in these associations between Black and White women require further investigation. Reducing sedentary behavior and increasing physical activity are potentially independent targets for breast cancer prevention interventions.

  7. Parametric likelihood inference for interval censored competing risks data.

    PubMed

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  8. Profile Likelihood and Incomplete Data.

    PubMed

    Zhang, Zhiwei

    2010-04-01

    According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.
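
    The paper's partial-identification machinery is beyond a short snippet, but the profiling operation it builds on is easy to show. A minimal sketch of a profile log-likelihood for a normal mean, with the variance profiled out; data are synthetic:

        import numpy as np

        def profile_loglik_mean(x, mu_grid):
            n = len(x)
            # For fixed mu, the MLE of sigma^2 is mean((x - mu)^2); plugging it
            # back in gives the profile log-likelihood up to a constant.
            s2 = np.array([np.mean((x - mu) ** 2) for mu in mu_grid])
            return -0.5 * n * np.log(s2)

        rng = np.random.default_rng(1)
        x = rng.normal(2.0, 1.5, size=40)
        grid = np.linspace(0.5, 3.5, 601)
        lp = profile_loglik_mean(x, grid)
        keep = lp >= lp.max() - np.log(8)        # 1/8-likelihood interval
        print(grid[keep][0], grid[keep][-1])     # evidence-based range for mu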

  9. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  10. Estimating phytoplankton photosynthesis by active fluorescence

    SciTech Connect

    Falkowski, P.G.; Kolber, Z.

    1992-01-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption cross section ({sigma}), and a quantum yield ({phi}). As photosynthesis becomes increasingly light saturated, an increased fraction of reaction centers close. At light saturation the maximum photosynthetic rate is given as the product of the number of reaction centers (n) and their maximum electron transport rate (1/{tau}). Using active fluorometry it is possible to measure non-destructively and in real time the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as {sigma} and {phi}. {tau} can be readily calculated from knowledge of the light saturation parameter I{sub k} (which can be deduced in situ by active fluorescence measurements) and {sigma}. We built a pump and probe fluorometer, which is interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f{sub 0}) and succeeding (f{sub m}) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15 and corresponds to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis and sections showing the variability in its composite parameters reveal a significant effect of nutrient availability on biomass specific rates of photosynthesis in the ocean.
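
    For orientation, a sketch of how the quantities named above combine under one common parameterization; the yields, cross section, and turnover time below are invented placeholders, not the authors' calibration:

        import numpy as np

        f0, fm = 0.30, 0.75            # probe yields before/after the pump flash
        phi = (fm - f0) / fm           # photochemical quantum efficiency
        sigma = 4e-18                  # effective absorption cross section, m^2
        tau = 2e-3                     # electron turnover time, s
        Ik = 1.0 / (sigma * tau)       # light saturation parameter, quanta m^-2 s^-1

        I = np.logspace(18, 22, 100)   # irradiance grid
        P_rel = 1.0 - np.exp(-I / Ik)  # fraction of closed centers (one-hit model)
        print(phi, Ik, P_rel[-1])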

  11. Be the Volume: A Classroom Activity to Visualize Volume Estimation

    ERIC Educational Resources Information Center

    Mikhaylov, Jessica

    2011-01-01

    A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…

  12. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
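
    A minimal ABC rejection sampler on a toy Gaussian problem; cosmoabc's Population Monte Carlo layer and adaptive importance sampling are not shown, and the prior, summary statistic, and tolerance below are arbitrary illustrative choices:

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.normal(1.5, 1.0, size=200)
        obs_mean = observed.mean()     # summary statistic of the observed "catalog"

        def simulator(mu):             # forward simulation of mock data
            return rng.normal(mu, 1.0, size=200).mean()

        accepted = []
        while len(accepted) < 1000:
            mu = rng.uniform(-5.0, 5.0)                # draw from the prior
            if abs(simulator(mu) - obs_mean) < 0.05:   # distance < epsilon
                accepted.append(mu)

        print(np.mean(accepted), np.std(accepted))     # approximate posterior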

  13. Nonparametric Bayes Factors Based On Empirical Likelihood Ratios

    PubMed Central

    Vexler, Albert; Deng, Wei; Wilding, Gregory E.

    2012-01-01

    Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factor (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904
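
    A minimal sketch of the EL ratio for a population mean, the building block the proposed Bayes factors rely on, using the standard Lagrange-multiplier representation; data are synthetic:

        import numpy as np
        from scipy.optimize import brentq

        def neg2_log_elr(x, mu0):
            # Weights have the form w_i = 1 / (n * (1 + lam * (x_i - mu0))).
            d = x - mu0
            if d.min() >= 0 or d.max() <= 0:
                return np.inf          # mu0 outside the convex hull of the data
            g = lambda lam: np.sum(d / (1.0 + lam * d))
            lo = -1.0 / d.max() + 1e-10    # keep every weight positive
            hi = -1.0 / d.min() - 1e-10
            lam = brentq(g, lo, hi)        # g is monotone, so the root is unique
            return 2.0 * np.sum(np.log1p(lam * d))

        rng = np.random.default_rng(0)
        x = rng.normal(0.3, 1.0, size=50)
        print(neg2_log_elr(x, 0.3))    # ~ chi-square(1) under the true mean
        print(neg2_log_elr(x, 1.2))    # large value: 1.2 is implausible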

  14. On the likelihood of forests

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  15. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
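
    Not MALCOM itself (continuity maps are a different construction), but a stand-in showing the general idea of likelihood-based anomaly scoring for categorical sequences, here with a first-order Markov model and add-one smoothing; the toy "medical histories" are invented:

        from collections import defaultdict
        import math

        def train(sequences):
            counts = defaultdict(lambda: defaultdict(int))
            for seq in sequences:
                for a, b in zip(seq, seq[1:]):
                    counts[a][b] += 1
            return counts

        def avg_loglik(counts, seq, vocab):
            total = 0.0
            for a, b in zip(seq, seq[1:]):
                row = counts[a]
                denom = sum(row.values()) + len(vocab)   # add-one smoothing
                total += math.log((row[b] + 1) / denom)
            return total / max(len(seq) - 1, 1)

        histories = [["visit", "xray", "cast"], ["visit", "labs", "rx"]] * 50
        vocab = {"visit", "xray", "cast", "labs", "rx", "surgery"}
        model = train(histories)
        print(avg_loglik(model, ["visit", "xray", "cast"], vocab))     # typical
        print(avg_loglik(model, ["surgery", "surgery", "rx"], vocab))  # anomalous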

  16. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  17. Shrinkage approach for EEG covariance matrix estimation.

    PubMed

    Beltrachini, Leandro; von Ellenrieder, Nicolas; Muravchik, Carlos H

    2010-01-01

    We present a shrinkage estimator for the EEG spatial covariance matrix of the background activity. We show that such an estimator has some advantages over the maximum likelihood and sample covariance estimators when the number of available data to carry out the estimation is low. We find sufficient conditions for the consistency of the shrinkage estimators and results concerning their numerical stability. We compare several shrinkage schemes and show how to improve the estimator by incorporating known structure of the covariance matrix.
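
    A minimal sketch of the shrinkage idea: a convex combination of the sample covariance with a scaled-identity target. The mixing weight is hand-set here, whereas the paper derives conditions on it (and estimators such as Ledoit-Wolf choose it from the data):

        import numpy as np

        def shrink_cov(X, weight=0.2):
            """X: n_samples x n_channels array of EEG background activity."""
            S = np.cov(X, rowvar=False)                  # sample covariance
            mu = np.trace(S) / S.shape[0]                # average variance
            return (1.0 - weight) * S + weight * mu * np.eye(S.shape[0])

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 64))   # few samples, many channels: S is singular
        print(np.linalg.cond(np.cov(X, rowvar=False)))   # huge (ill-conditioned)
        print(np.linalg.cond(shrink_cov(X)))             # far smaller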

  18. Comparison between geochemical and biological estimates of subsurface microbial activities.

    PubMed

    Phelps, T J; Murphy, E M; Pfiffner, S M; White, D C

    1994-01-01

    Geochemical and biological estimates of in situ microbial activities were compared for the aerobic and microaerophilic sediments of the Atlantic Coastal Plain. Radioisotope time-course experiments suggested oxidation rates greater than millimolar quantities per year for acetate and glucose. Geochemical analyses assessing oxygen consumption, soluble organic carbon utilization, sulfate reduction, and carbon dioxide production suggested organic oxidation rates of nano- to micromolar quantities per year. Radiotracer time-course experiments thus appeared to overestimate rates of organic carbon oxidation, sulfate reduction, and biomass production by factors of 10(3)-10(6) relative to estimates calculated from groundwater analyses. Based on the geochemical evidence, in situ microbial metabolism was estimated to be in the nano- to micromolar range per year, and the average doubling time for the microbial community was estimated to be centuries.

  19. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  1. Human ECG signal parameters estimation during controlled physical activity

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts; during physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on the Pan-Tompkins algorithm to estimate their parameters and to evaluate the method's performance.
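
    A rough sketch of the classic Pan-Tompkins stages (band-pass filtering, differentiation, squaring, moving-window integration, peak picking); the 5-15 Hz band, thresholds, and impulse-train test signal are illustrative choices, not the authors' settings:

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def detect_qrs(ecg, fs):
            b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
            x = filtfilt(b, a, ecg)               # band-pass around QRS energy
            x = np.diff(x) ** 2                   # derivative, then squaring
            win = int(0.150 * fs)                 # 150 ms integration window
            x = np.convolve(x, np.ones(win) / win, mode="same")
            peaks, _ = find_peaks(x, height=0.3 * x.max(), distance=int(0.25 * fs))
            return peaks                          # approximate R-peak samples

        fs = 360
        ecg = np.zeros(10 * fs)
        ecg[::fs] = 1.0                           # 10 synthetic "beats", 1 s apart
        print(len(detect_qrs(ecg, fs)))           # expect about 10 detections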

  2. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  3. EIA Completes Corrections to Drilling Activity Estimates Series

    EIA Publications

    1999-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  4. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  5. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  6. Maximum Likelihood Factor Structure of the Family Environment Scale.

    ERIC Educational Resources Information Center

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  7. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied to reduce the uncertainties: First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than that calculated without considering parking activity.
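
    The weighting step can be shown in a few lines; the hourly rates and parking distribution below are invented placeholders for the paper's Wade-Reddy-derived rates and observed parking-activity data:

        import numpy as np

        hours = np.arange(24)
        # Hourly incremental vapor generation rate, g per vehicle per hour,
        # peaking in the afternoon (placeholder shape).
        rate = 0.5 + 0.4 * np.sin((hours - 6) / 24 * 2 * np.pi)
        # Fraction of fleet parked-hours falling in each hour (sums to 1).
        parked = np.full(24, 1 / 24)

        avg_rate = np.sum(rate * parked)   # weighted average, g/vehicle/hour
        print(avg_rate)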

  8. Using multiple linear regression model to estimate thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Putro, W. S.

    2017-03-01

    This paper aims to develop a numerical regression model to estimate thunderstorm activity. Daily meteorological data, including pressure (P), temperature (T), relative humidity (H), cloud (C), precipitable water vapor (PWV), and precipitation, were used in the proposed method. The model was constructed with six input configurations and one target output. The output tested in this work is the thunderstorm event record for one year of data. Results showed that the model works well in estimating thunderstorm activity, with training reaching a maximum of 1000 epochs and a percent error below 50%. The model also detected higher thunderstorm activity in May and October than in the other months, consistent with the inter-monsoon season.
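
    A minimal sketch of fitting the six-input regression named in the title by ordinary least squares, on synthetic stand-in data (the paper trains an iterative model on station observations):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(365, 6))     # daily P, T, H, C, PWV, precipitation
        beta_true = np.array([0.2, 0.8, 0.5, 0.3, 0.9, 0.4])   # assumed weights
        y = X @ beta_true + rng.normal(scale=0.5, size=365)    # activity index

        X1 = np.column_stack([np.ones(len(X)), X])             # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        rmse = np.sqrt(np.mean((X1 @ beta - y) ** 2))
        print(beta[1:], rmse)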

  9. Multiplicative earthquake likelihood models incorporating strain rates

    NASA Astrophysics Data System (ADS)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing periods. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.

  10. Geosynchronous SAR Orbit Estimation Based on Active Radar Calibrators

    NASA Astrophysics Data System (ADS)

    Leanza, Antonio; Monti Guarnieri, Andrea; Boroquets Ibars, Antoni

    2016-08-01

    The Geosynchronous SAR (GEOSAR) is a system designed for continuous monitoring of a fixed region of the Earth. Differently from LEOSAR, the GEOSAR system requires very long times to form its Synthetic Aperture (SA). This entails the onset of several decorrelation sources, such as atmosphere propagation, orbit perturbations, and clock drifts, that have to be compensated to avoid defocusing. In this paper a solution is proposed to cope with the phase error introduced by orbit perturbations within the SA by means of Active Radar Calibrators (ARC) deployed at convenient positions in the illuminated area. Each ARC provides two-way pulse-by-pulse echo delay and carrier phase observations used to track the satellite position. The estimation follows an iterative approach whose steps are dividing the SA into sub-apertures, performing the estimation for each sub-aperture, applying the estimated orbit correction, and repeating for longer sub-apertures.

  11. Approximate likelihood for large irregularly spaced spatial data

    PubMed Central

    Fuentes, Montserrat

    2008-01-01

    Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculations of the likelihood for a Gaussian spatial process observed at n locations require O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log^2 n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
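
    For reference, a minimal sketch of the classical one-dimensional Whittle log-likelihood that the paper extends to spatial lattices with missing values; the AR(1) spectral density, data, and grid search are illustrative assumptions:

        import numpy as np

        def whittle_loglik(x, phi, sigma2=1.0):
            n = len(x)
            k = np.arange(1, (n - 1) // 2 + 1)     # positive Fourier frequencies
            w = 2 * np.pi * k / n
            I = np.abs(np.fft.fft(x)[k]) ** 2 / (2 * np.pi * n)   # periodogram
            f = sigma2 / (2 * np.pi) / (1 - 2 * phi * np.cos(w) + phi ** 2)
            return -np.sum(np.log(f) + I / f)      # no determinants needed

        rng = np.random.default_rng(0)
        x = np.zeros(1024)
        for t in range(1, 1024):                   # simulate AR(1) with phi = 0.6
            x[t] = 0.6 * x[t - 1] + rng.normal()
        phis = np.linspace(0.3, 0.9, 61)
        print(phis[np.argmax([whittle_loglik(x, p) for p in phis])])  # ~0.6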

  12. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed to be as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control…

  13. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    …ranging from the income level to age and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping…learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the…steps in Section 5. 2. Related Work: A number of strategies have been proposed for active learning in the classification framework. Some of those center…

  14. Parametric estimation of sample entropy for physical activity recognition.

    PubMed

    Aktaruzzaman, Md; Scarabottolo, Nello; Sassi, Roberto

    2015-08-01

    An insufficient amount of physical activity, and hence storage of calories, may lead to depression, obesity, cardiovascular diseases, and diabetes. The amount of calories consumed depends on the type of activity, so the recognition of physical activity is very important for estimating the amount of calories spent by a subject every day. Several research works have already been published in the literature on activity recognition through accelerometers (body-worn sensors). The accuracy of any recognition system depends on the robustness of the selected features and classifiers. The typical features reported for most physical activity recognition are autoregressive coefficients (ARcoeffs), signal magnitude area (SMA), tilt angle (TA), and standard deviation (STD). In this study, we examined the feasibility of using a single value of sample entropy estimated parametrically (SETH) from an AR model instead of ARcoeffs. After the feasibility study, we also compared the recognition accuracies of two popular classifiers, i.e., artificial neural networks (ANN) and support vector machines (SVM). Recognition accuracies were compared using a linear structure (where all types of activities are classified using a single classifier) and a hierarchical structure (where activities are first divided into static and dynamic events, and the activities of each event are then classified in a second stage). The study showed that the use of SETH provides recognition accuracy (69.82%) similar to that of ARcoeffs (67.67%) using ANN. The linear structure of SVM performs better (average accuracy: 98.22%) than linear ANN (average accuracy: 94.78%). The use of a hierarchical structure for ANN increases the average recognition accuracy of static activities to about 100%. However, no significant changes are observed using hierarchical SVM rather than the linear one.

  15. Altered likelihood of brain activation in attention and working memory networks in patients with multiple sclerosis: An ALE meta-analysis

    PubMed Central

    Kollndorfer, K.; Krajnik, J.; Woitek, R.; Freiherr, J.; Prayer, D.; Schöpf, V.

    2013-01-01

    Multiple sclerosis (MS) is a chronic neurological disease, frequently affecting attention and working memory functions. Functional imaging studies investigating those functions in MS patients are hard to compare, as they include heterogeneous patient groups and use different paradigms for cognitive testing. The aim of this study was to investigate alterations in neuronal activation between MS patients and healthy controls performing attention and working memory tasks. Two meta-analyses of previously published fMRI studies investigating attention and working memory were conducted for MS patients and healthy controls, respectively. Resulting maps were contrasted to compare brain activation in patients and healthy controls. Significantly increased brain activation in the inferior parietal lobule and the dorsolateral prefrontal cortex was detected for healthy controls. In contrast, higher neuronal activation in MS patients was obtained in the left ventrolateral prefrontal cortex and the right premotor area. This meta-analytic approach summarizes and compares previous fMRI investigations of cognitive function, enabling a more general view of cognitive dysfunction in this heterogeneous disease. PMID:24056084

  16. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt(max) and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.

  17. Confidence interval of the likelihood ratio associated with mixed stain DNA evidence.

    PubMed

    Beecham, Gary W; Weir, Bruce S

    2011-01-01

    Likelihood ratios are necessary to properly interpret mixed stain DNA evidence. They can flexibly consider alternate hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate and not a fixed value, because the calculations are functions of allelic frequency estimates derived from a small portion of the population. Current methods do not account for uncertainty in the likelihood ratio estimates and therefore provide an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numeric work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.
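
    A toy single-locus version of the idea: propagate allele-frequency sampling error to the log likelihood ratio with the delta method (a first-order Taylor expansion). The LR formula 1/(2*p1*p2) for a heterozygous single-source profile, the sample size, and the frequencies are assumptions, and the negative covariance between allele counts is ignored; the paper's mixed-stain formulae are more involved:

        import numpy as np

        n = 200                        # alleles sampled in the reference database
        p1, p2 = 0.10, 0.05            # estimated allele frequencies

        log_lr = -np.log(2 * p1 * p2)
        # Delta method: var(log p_hat) ~ (1 - p) / (n * p) for a proportion.
        var_log = (1 - p1) / (n * p1) + (1 - p2) / (n * p2)
        half = 1.96 * np.sqrt(var_log)
        print(np.exp(log_lr), np.exp(log_lr - half), np.exp(log_lr + half))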

  18. Section 9: Ground Water - Likelihood of Release

    EPA Pesticide Factsheets

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances to any of the aquifers underlying the site.

  19. MSFC solar activity predictions for satellite orbital lifetime estimation

    NASA Technical Reports Server (NTRS)

    Fuler, H. C.; Lundquist, C. A.; Vaughan, W. W.

    1979-01-01

    The procedure to predict solar activity indices for use in upper atmosphere density models is given, together with an example of the performance. The prediction procedure employs a least-squares linear regression model to generate the predicted smoothed sunspot number R{sub 13} and geomagnetic index A{sub p(13)} values. Linear regression equations are then employed to compute corresponding F{sub 10.7(13)} solar flux values from the predicted R{sub 13} values. The output is issued principally for satellite orbital lifetime estimations.

  20. Specularly modified vegetation indices to estimate photosynthetic activity

    NASA Technical Reports Server (NTRS)

    Rondeaux, G.; Vanderbilt, V. C.

    1993-01-01

    The hypothesis tested was that some part of the ecosystem-dependent variability of vegetation indices is attributable to the effects of light specularly reflected by leaves. "Minus specular" indices were defined excluding effects of specular light, which contains no cellular pigment information. Results, both empirical and theoretical, show that the "minus specular" indices, when compared to the traditional vegetation indices, potentially provide better estimates of the photosynthetic activity within a canopy - and therefore canopy primary production - specifically as a function of sun and view angles.

  1. Active immunization of cattle with a bothropic toxoid does not abrogate envenomation by Bothrops asper venom, but increases the likelihood of survival.

    PubMed

    Herrera, María; González, Katherine; Rodríguez, Carlos; Gómez, Aarón; Segura, Álvaro; Vargas, Mariángela; Villalta, Mauren; Estrada, Ricardo; León, Guillermo

    2017-03-01

    This study assessed the protective effect of active immunization of cattle to prevent the envenomation induced by B. asper venom. Two groups of oxen were immunized with a bothropic toxoid and challenged by an intramuscular injection of either 10 or 50 mg B. asper venom, to induce moderate or severe envenomations, respectively. Non-immunized oxen were used as controls. It was found that immunized oxen developed local edema similar to that observed in non-immunized animals. However, systemic effects were totally prevented in immunized oxen challenged with 10 mg venom, and therefore antivenom treatment was not required. When immunized oxen were challenged with 50 mg venom, coagulopathy was manifested 3-16 h later than in non-immunized oxen, demonstrating a delay in the onset of systemic envenomation. In these animals, active immunization did not eliminate the need for antivenom treatment, but increased the time lapse in which antivenom administration is still effective. All experimentally envenomed oxen completely recovered within a week after venom injection. Our results suggest that immunization of cattle with a bothropic toxoid prevents the development of systemic effects in moderate envenomations by B. asper, but does not abrogate these effects in severe envenomations.

  2. CosmoSlik: Cosmology sampler of likelihoods

    NASA Astrophysics Data System (ADS)

    Millea, Marius

    2017-01-01

    CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

  3. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program, we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates, which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as…

  4. Recovering Velocity Distributions Via Penalized Likelihood

    NASA Astrophysics Data System (ADS)

    Merritt, David

    1997-07-01

    Line-of-sight velocity distributions are crucial for unravelling the dynamics of hot stellar systems. We present a new formalism based on penalized likelihood for deriving such distributions from kinematical data, and evaluate the performance of two algorithms that extract N(V) from absorption-line spectra and from sets of individual velocities. Both algorithms are superior to existing ones in that the solutions are nearly unbiased even when the data are so poor that a great deal of smoothing is required. In addition, the discrete-velocity algorithm is able to remove a known distribution of measurement errors from the estimate of N(V). The formalism is used to recover the velocity distribution of stars in five fields near the center of the globular cluster omega Centauri.

  5. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes that have better performance than those presently in use and yet does not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  6. A Unified Maximum Likelihood Approach to Document Retrieval.

    ERIC Educational Resources Information Center

    Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex

    2001-01-01

    Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete objective function that estimates both document and query parameters in accordance with all available feedback data. (AEF)

  7. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
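
    A minimal sketch of the expected log-likelihood trick for a canonical Poisson GLM with Gaussian covariates x ~ N(0, C): the data term is kept via a precomputed X'y, while the per-observation sum of exp(X @ theta) is replaced by its closed-form expectation n * exp(theta' C theta / 2). All data are synthetic, and additive constants (log y!) are dropped from both versions:

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 50_000, 5
        C = np.eye(p)
        X = rng.multivariate_normal(np.zeros(p), C, size=n)
        theta = np.array([0.3, -0.2, 0.1, 0.0, 0.25])
        y = rng.poisson(np.exp(X @ theta))

        def exact_loglik(th):                      # O(n p) per evaluation
            eta = X @ th
            return y @ eta - np.sum(np.exp(eta))

        Xty = X.T @ y                              # precomputed once
        def expected_loglik(th):                   # O(p^2) per evaluation
            return Xty @ th - n * np.exp(0.5 * th @ C @ th)

        print(exact_loglik(theta), expected_loglik(theta))   # close for large n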
388. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    PubMed

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

389. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
390. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1985-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a Landsat MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95 percent of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50 percent of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73 percent of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.

391. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1984-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a LANDSAT MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95% of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50% of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73% of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.
392. Estimation of photosynthetically active radiation absorbed at the surface

    NASA Astrophysics Data System (ADS)

    Li, Zhanqing; Moreau, Louis; Cihlar, Josef

    1997-12-01

    This paper presents a validation and application of an algorithm by Li and Moreau [1996] for retrieving photosynthetically active radiation (PAR) absorbed at the surface (APARSFC). APARSFC is a key input to estimating PAR absorbed by the green canopy during photosynthesis. Extensive ground-based and space-borne observations collected during the BOREAS experiment in 1994 were processed, colocated, and analyzed. They include downwelling and upwelling PAR observed at three flux towers, aerosol optical depth from ground-based photometers, and satellite reflectance measurements at the top of the atmosphere. The effects of three-dimensional clouds, aerosols, and bidirectional dependence on the retrieval of APARSFC were examined. While the algorithm is simple and has only three input parameters, the comparison between observed and estimated APARSFC shows a small bias error (<10 W m^-2) and moderate random error (36 W m^-2 for clear, 61 W m^-2 for cloudy). Temporal and/or spatial mismatch between satellite and surface observations is a major cause of the random error, especially when broken clouds are present. The algorithm was subsequently employed to map the distribution of monthly mean APARSFC over the 1000×1000 km^2 BOREAS region. Considerable spatial variation is found due to variable cloudiness, forest fires, and nonuniform surface albedo.

393. Likelihood maximization for list-mode emission tomographic image reconstruction.

    PubMed

    Byrne, C

    2001-10-01

    The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
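The EMML special case mentioned above has a compact multiplicative update: each intensity estimate is rescaled by the back-projected ratio of measured to predicted counts. A toy sketch under our own assumptions (a small random matrix standing in for the tomographic system geometry):

```python
import numpy as np

rng = np.random.default_rng(2)
n_detectors, n_voxels = 200, 50
A = rng.uniform(size=(n_detectors, n_voxels))      # system (projection) matrix
lam_true = rng.uniform(0.5, 2.0, size=n_voxels)    # true emission intensities
y = rng.poisson(A @ lam_true)                      # Poisson count data

lam = np.ones(n_voxels)                            # strictly positive start
sens = A.sum(axis=0)                               # per-voxel sensitivity
for _ in range(500):
    pred = A @ lam                                 # expected detector counts
    lam *= (A.T @ (y / pred)) / sens               # multiplicative EM update

print("correlation with truth:", np.corrcoef(lam, lam_true)[0, 1])
```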
394. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289

395. An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data

    PubMed Central

    Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong

    2013-01-01

    This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation, which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169
396. Vestige: Maximum likelihood phylogenetic footprinting

    PubMed Central

    Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A

    2005-01-01

    Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and

397. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
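The greedy scheduling idea can be caricatured in a few lines: keep a Gaussian belief over the unknown input-output weights and always probe the node whose measurement most reduces posterior variance, via a scalar Kalman update. This is our own simplification, not the authors' full state-space formulation with online EM parameter updates:

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, noise_var = 8, 0.25
w_true = rng.normal(size=n_nodes)        # unknown evoked-response weights
mean = np.zeros(n_nodes)                 # posterior mean (independent prior)
var = np.ones(n_nodes)                   # posterior variance per node

for step in range(40):
    j = int(np.argmax(var))              # greedy: probe the most uncertain node
    y = w_true[j] + rng.normal(scale=np.sqrt(noise_var))   # noisy evoked response
    k = var[j] / (var[j] + noise_var)    # scalar Kalman gain
    mean[j] += k * (y - mean[j])
    var[j] *= (1 - k)                    # variance shrinks after each probe

print("worst-case error:", np.max(np.abs(mean - w_true)))
```

Greedily probing the largest-variance node is optimal here because the one-step variance reduction var^2/(var + noise_var) is increasing in var; the paper establishes analogous optimality conditions for the full dynamic setting.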
398. Confidence interval based parameter estimation--a new SOCR applet and activity.

    PubMed

    Christou, Nicolas; Dinov, Ivo D

    2011-01-01

    Many scientific investigations depend on obtaining data-driven, accurate, robust and computationally-tractable parameter estimates. In the face of unavoidable intrinsic variability, there are different algorithmic approaches, prior assumptions and fundamental principles for computing point and interval estimates. Efficient and reliable parameter estimation is critical in making inference about observable experiments, summarizing process characteristics and prediction of experimental behaviors. In this manuscript, we demonstrate simulation, construction, validation and interpretation of confidence intervals, under various assumptions, using the interactive web-based tools provided by the Statistics Online Computational Resource (http://www.SOCR.ucla.edu). Specifically, we present confidence interval examples for population means, with known or unknown population standard deviation; population variance; population proportion (exact and approximate), as well as confidence intervals based on bootstrapping or the asymptotic properties of the maximum likelihood estimates. Like all SOCR resources, these confidence interval resources may be openly accessed via an Internet-connected Java-enabled browser. The SOCR confidence interval applet enables the user to empirically explore and investigate the effects of the confidence level, the sample size and the parameter of interest on the corresponding confidence interval. Two applications of the new interval estimation computational library are presented. The first is a simulation of confidence interval estimation of the US unemployment rate; the second demonstrates the computation of point and interval estimates of hippocampal surface complexity for Alzheimer's disease patients, mild cognitive impairment subjects and asymptomatic controls.
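For readers without the applet at hand, the two most familiar constructions it demonstrates, a t-based interval for a mean and a bootstrap percentile interval, look like this in a generic sketch (not SOCR code; the data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(loc=10.0, scale=2.0, size=40)   # simulated sample

# t-based 95% interval for the mean (population SD unknown)
m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
t = stats.t.ppf(0.975, df=len(x) - 1)
print("t interval:", (m - t * se, m + t * se))

# bootstrap percentile 95% interval for the mean
boot = np.array([rng.choice(x, size=len(x), replace=True).mean()
                 for _ in range(5000)])
print("bootstrap interval:", tuple(np.percentile(boot, [2.5, 97.5])))
```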
399. Complex picture for likelihood of ENSO-driven flood hazard

    NASA Astrophysics Data System (ADS)

    Emerton, R.; Cloke, H. L.; Stephens, E. M.; Zsoter, E.; Woolnough, S. J.; Pappenberger, F.

    2017-03-01

    El Niño and La Niña events, the extremes of ENSO climate variability, influence river flow and flooding at the global scale. Estimates of the historical probability of extreme (high or low) precipitation are used to provide vital information on the likelihood of adverse impacts during extreme ENSO events. However, the nonlinearity between precipitation and flood magnitude motivates the need for estimation of historical probabilities using analysis of hydrological data sets. Here, this analysis is undertaken using the ERA-20CM-R river flow reconstruction for the twentieth century. Our results show that the likelihood of increased or decreased flood hazard during ENSO events is much more complex than is often perceived and reported; probabilities vary greatly across the globe, with large uncertainties inherent in the data and clear differences when comparing the hydrological analysis to precipitation.

400. A likelihood approach to calculating risk support intervals

    SciTech Connect

    Leal, S.M.; Ott, J.

    1994-05-01

    Genetic risks are usually computed under the assumption that genetic parameters, such as the recombination fraction, are known without error. Uncertainty in the estimates of these parameters must translate into uncertainty regarding the risk. To allow for uncertainties in parameter values, one may employ Bayesian techniques or, in a maximum-likelihood framework, construct a support interval (SI) for the risk. Here the authors have implemented the latter approach. The SI for the risk is based on the SIs of parameters involved in the pedigree likelihood. As an empirical example, the SI for the risk was calculated for probands who are members of chronic spinal muscular atrophy kindreds. In order to evaluate the accuracy of a risk in genetic counseling situations, the authors advocate that, in addition to a point estimate, an SI for the risk should be calculated. 16 refs., 1 fig., 1 tab.
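A support interval of the kind advocated here is straightforward to compute for a one-parameter model: keep all parameter values whose log10-likelihood lies within m units of the maximum (m = 2 is a common choice). A generic binomial sketch with invented numbers, not the pedigree likelihood of the paper:

```python
import numpy as np
from scipy import stats

n, k, m = 100, 23, 2          # n trials, k successes, m-unit support interval
p = np.linspace(1e-4, 1 - 1e-4, 10000)
loglik10 = stats.binom.logpmf(k, n, p) / np.log(10)   # log10 likelihood of p
inside = p[loglik10 >= loglik10.max() - m]            # within m units of the max
print("MLE:", k / n, " support interval:", (inside.min(), inside.max()))
```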
401. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
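The building block of these pairwise composite likelihood equations is the FGM copula density c(u, v; theta) = 1 + theta (1 - 2u)(1 - 2v). The sketch below estimates a single dependence parameter from simulated uniform pairs by maximizing the summed log pair density; it is a toy version of ours that omits the paper's censoring and distance-dependent theta:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
theta_true, n = 0.6, 4000

# Simulate FGM pairs by conditional inversion: F(v|u) = (1+a)v - a v^2 with
# a = theta (1 - 2u); solve the quadratic at a Uniform(0,1) draw w.
u, w = rng.uniform(size=n), rng.uniform(size=n)
a = theta_true * (1 - 2 * u)
a_safe = np.where(np.abs(a) < 1e-9, 1.0, a)           # guard against a ~ 0
v = np.where(np.abs(a) < 1e-9, w,
             ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a_safe))

def neg_loglik(theta):
    # FGM copula density evaluated at each simulated pair
    return -np.sum(np.log(1 + theta * (1 - 2 * u) * (1 - 2 * v)))

fit = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print("true:", theta_true, "estimate:", round(fit.x, 3))
```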
402. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

403. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
404. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    PubMed

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation of the existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975), both in the full data setting and in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings, as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

405. Maximum likelihood tuning of a vehicle motion filter

    NASA Technical Reports Server (NTRS)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood parameter estimation for the unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.
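The estimation loop described here, a quasi-Newton optimizer driven by the likelihood gradient, reduces schematically to the following generic sketch (ours, not the paper's filter): estimate a measurement bias and log noise scale from Gaussian residuals by BFGS with an analytic gradient:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
truth = np.array([0.8, np.log(0.5)])                 # bias b, log sigma
z = truth[0] + np.exp(truth[1]) * rng.standard_normal(500)

def neg_loglik_and_grad(p):
    b, log_s = p
    s2 = np.exp(2 * log_s)
    r = z - b
    nll = 0.5 * np.sum(r ** 2) / s2 + len(z) * log_s   # Gaussian -log L (up to const.)
    grad = np.array([-np.sum(r) / s2,                  # d nll / d b
                     len(z) - np.sum(r ** 2) / s2])    # d nll / d log_s
    return nll, grad

fit = minimize(neg_loglik_and_grad, x0=np.zeros(2), jac=True, method="BFGS")
print("estimate:", fit.x, "truth:", truth)
```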
406. Likelihood of methane-producing microbes on Mars

    NASA Astrophysics Data System (ADS)

    Miller, Joseph D.; Case, Marianne J.; Straat, Patricia Ann; Levin, Gilbert V.

    2010-09-01

    We present a likelihood estimate that methane was a significant component of the gas detected by the Labeled Release (LR) experiment in the Viking Mission to Mars of 1976. In comparison with terrestrial methanogen production of methane, we estimate the size of the putative microbe population necessary to produce the LR gas, had it been primarily methane. We extrapolate that figure to estimate the number of methanogens necessary to produce the methane content of the Martian atmosphere. Next, we estimate the amount of Martian soil and the amount of water needed for that global population of microbes. Finally, assuming a globally distributed population of such microbes, we estimate the likely sub-surface depth at which such methanogens could be detected.

407. Maximum-likelihood analysis of the COBE angular correlation function

    NASA Technical Reports Server (NTRS)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q_rms-PS and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q_rms-PS = (15.7 +/- 2.6) exp[0.46(1 - n)] microK for fixed n. Our result is slightly smaller than, and has a smaller statistical uncertainty than, the 1992 estimate of Smoot et al.
408. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016

409. Censored Median Regression and Profile Empirical Likelihood

    PubMed Central

    Subramanian, Sundarraman

    2007-01-01

    We implement profile empirical likelihood based inference for censored median regression models. Inference for any specified sub-vector is carried out by profiling out the nuisance parameters from the "plug-in" empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi square distributions. Unlike for the full empirical likelihood, however, the derived asymptotic distribution has intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul's minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set. PMID:19112527

410. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    PubMed Central

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
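Spackman's point that least squares is not the ML criterion for a sigmoid output shows up directly in the gradients: with the Bernoulli ML (cross-entropy) loss, the sigmoid-derivative factor that slows LS-BP cancels. A single-layer sketch on made-up data (our own toy setup, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train(loss, epochs=2000, lr=0.1):
    w = np.zeros(3)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        if loss == "sse":                          # least-squares back-propagation
            grad = X.T @ ((p - y) * p * (1 - p))   # extra p(1-p) factor
        else:                                      # cross-entropy = Bernoulli ML
            grad = X.T @ (p - y)                   # the factor cancels
        w -= lr * grad / len(y)
    return w

for loss in ("sse", "ml"):
    w = train(loss)
    acc = ((sigmoid(X @ w) > 0.5) == y).mean()
    print(loss, np.round(w, 2), "train accuracy:", acc)
```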
411. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    PubMed

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.

412. Exact Likelihood-free Markov Chain Monte Carlo for Elliptically Contoured Distributions

    PubMed Central

    Marjoram, Paul

    2015-01-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible. PMID:26167984
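The mechanism behind this result is the pseudo-marginal argument: substituting any non-negative unbiased estimate of the likelihood into the Metropolis-Hastings ratio leaves the stationary distribution unchanged, provided the current estimate is recycled rather than recomputed. A generic toy sketch (flat prior, and a simple latent-variable likelihood of our own choosing rather than the paper's elliptically contoured setting):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Data from a latent-variable model: z_i ~ N(0,1), x_i | z_i ~ N(mu + z_i, 0.5^2),
# so the exact marginal likelihood is N(mu, 1.25) -- pretend it is unavailable.
x_obs = rng.normal(loc=1.0, scale=np.sqrt(1.25), size=30)

def lik_hat(mu, M=64):
    """Non-negative, unbiased Monte Carlo estimate of the marginal likelihood."""
    z = rng.normal(size=(M, x_obs.size))                  # fresh latents per obs.
    dens = stats.norm.pdf(x_obs, loc=mu + z, scale=0.5)   # shape (M, n)
    return np.prod(dens.mean(axis=0))

mu, L = 0.0, lik_hat(0.0)
draws = []
for _ in range(5000):
    prop = mu + 0.3 * rng.normal()
    L_prop = lik_hat(prop)             # estimate only at the proposal;
    if rng.uniform() < L_prop / L:     # the current value L is recycled (crucial)
        mu, L = prop, L_prop
    draws.append(mu)
print("posterior mean of mu:", np.mean(draws[1000:]))
```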
413. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. Structures of receivers derived from a particular interpretation of maximum-likelihood metrics. Receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, the complexity of which would depend on N.

414. Monocular distance estimation from optic flow during active landing maneuvers.

    PubMed

    van Breugel, Floris; Morgansen, Kristi; Dickinson, Michael H

    2014-06-01

    Vision is arguably the most widely used sensor for position and velocity estimation in animals, and it is increasingly used in robotic systems as well. Many animals use stereopsis and object recognition in order to make a true estimate of distance. For a tiny insect such as a fruit fly or honeybee, however, these methods fall short. Instead, an insect must rely on calculations of optic flow, which can provide a measure of the ratio of velocity to distance, but not either parameter independently. Nevertheless, flies and other insects are adept at landing on a variety of substrates, a behavior that inherently requires some form of distance estimation in order to trigger distance-appropriate motor actions such as deceleration or leg extension. Previous studies have shown that these behaviors are indeed under visual control, raising the question: how does an insect estimate distance solely using optic flow? In this paper we use a nonlinear control theoretic approach to propose a solution for this problem. Our algorithm takes advantage of visually controlled landing trajectories that have been observed in flies and honeybees. Finally, we implement our algorithm, which we term dynamic peering, using a camera mounted to a linear stage to demonstrate its real-world feasibility.
415. Corrected profile likelihood confidence interval for binomial paired incomplete data.

    PubMed

    Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal

    2013-01-01

    Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative, with large expected confidence interval width (ECIW), in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates a confidence interval with a much better coverage probability and shorter ECIWs. The performances of the method along with the corrections are demonstrated through extensive simulation. Finally, three real world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.

416. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.
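For orientation, the independence baseline behind such paired-comparison analyses is the Bradley-Terry model, whose log-likelihood for a table of win counts is that of a logistic model in ability differences. A minimal sketch with made-up data (the paper's contribution, modeling dependence between fights, is not attempted here):

```python
import numpy as np
from scipy.optimize import minimize

# wins[i, j] = number of contests animal i won against animal j (invented data)
wins = np.array([[0, 6, 8, 7],
                 [2, 0, 5, 6],
                 [1, 3, 0, 4],
                 [1, 2, 3, 0]])

def neg_loglik(free):
    beta = np.append(free, 0.0)               # fix the last ability at 0
    diff = beta[:, None] - beta[None, :]      # diff[i, j] = beta_i - beta_j
    # log P(i beats j) = diff - log(1 + exp(diff)) under Bradley-Terry
    return -np.sum(wins * (diff - np.logaddexp(0.0, diff)))

fit = minimize(neg_loglik, np.zeros(wins.shape[0] - 1), method="BFGS")
print("estimated abilities:", np.round(np.append(fit.x, 0.0), 2))
```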
417. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.

418. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

419. Monte Carlo Simulation to Estimate Likelihood of Direct Lightning Strikes

    NASA Technical Reports Server (NTRS)

    Mata, Carlos; Medelius, Pedro

    2008-01-01

    A software tool has been designed to quantify the lightning exposure of the stack at the launch pads under different configurations. In order to predict lightning strikes to generic structures, this model uses leaders whose origins (in the x-y plane) are obtained from a 2D random, normal distribution.
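Read simply, the leader-sampling step amounts to Monte Carlo integration: draw leader origins from a bivariate normal and count the fraction falling inside a structure's collection area. A toy sketch under our own assumed geometry (a circular collection area, which also admits a closed form to check against; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
n, sigma, r = 1_000_000, 300.0, 25.0      # leaders, spread (m), collection radius (m)
xy = rng.normal(scale=sigma, size=(n, 2))        # 2D normal leader origins
hits = (xy ** 2).sum(axis=1) < r ** 2            # leaders landing within radius r
p_mc = hits.mean()
p_exact = 1 - np.exp(-r ** 2 / (2 * sigma ** 2)) # Rayleigh CDF for this geometry
print("Monte Carlo:", p_mc, "closed form:", p_exact)
```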
420. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    DTIC Science & Technology

    1977-02-01

    ...maximizing the same have been proposed i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and ii) in the frequency domain by...moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Astrom, K. J. (1970), Introduction to

421. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    DTIC Science & Technology

    1984-08-01

    Educational Testing Service, Princeton, NJ 08541; NR 150-520.
422. Effect of Magnitude Estimation of Pleasantness and Intensity on fMRI Activation to Taste.

    PubMed

    Cerf-Ducastel, B; Haase, L; Murphy, C

    2012-03-01

    The goal of the present study was to investigate whether the psychophysical evaluation of taste stimuli using magnitude estimation influences the pattern of cortical activation observed with neuroimaging; that is, whether different brain areas are involved in the magnitude estimation of pleasantness relative to the magnitude estimation of intensity. fMRI was utilized to examine the patterns of cortical activation involved in magnitude estimation of pleasantness and intensity during hunger in response to taste stimuli. During scanning, subjects were administered taste stimuli orally and were asked to evaluate the perceived pleasantness or intensity using the general Labeled Magnitude Scale (Green 1996, Bartoshuk et al. 2004). Image analysis was conducted using AFNI. Magnitude estimation of intensity and pleasantness shared common activations in the insula, rolandic operculum, and the medio-dorsal nucleus of the thalamus. Globally, magnitude estimation of pleasantness produced significantly more activation than magnitude estimation of intensity. Areas differentially activated during magnitude estimation of pleasantness versus intensity included, e.g., the insula, the anterior cingulate gyrus, and putamen, suggesting that different brain areas were recruited when subjects made magnitude estimates of intensity and pleasantness. These findings demonstrate significant differences in brain activation during magnitude estimation of intensity and pleasantness to taste stimuli. An appreciation for the complexity of brain response to taste stimuli may facilitate a clearer understanding of the neural mechanisms underlying eating behavior and overconsumption.

423. Variant for estimating the activity of tropical cyclone groups in the world ocean

    NASA Astrophysics Data System (ADS)

    Yaroshevich, M. I.

    2016-12

    It is especially important to know the character and the intensity level of tropical cyclone (TC) activity when a system for estimating cyclonic danger and risk is formed. During seasons of increased cyclonic activity, when several TCs are simultaneously active, the total energy effect of the cyclone group's joint action is not estimated numerically. Cyclonic activity is, as a rule, characterized by the number of TCs that occur in the considered zone. A variant of the criterion according to which relative cyclonic activity is estimated is presented.
During seasons of increased cyclonic activity, when several TCs are simultaneously active, the total energy effect of the joint action of a cyclone group is not estimated numerically. Cyclonic activity is, as a rule, characterized by the number of TCs that occur in the considered zone. A variant of the criterion by which relative cyclonic activity can be estimated is presented.

  424. Using an Active Sensor to Estimate Orchard Grass (Dactylis glomerata L.) Dry Matter Yield and Quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Remote sensing in the form of active sensors could be used to estimate forage biomass on spatial and temporal scales. The objective of this study is to use canopy reflectance measurements from an active remote sensor to compare different vegetation indices as a means of estimating final dry matter y…

  425. Estimating discharged plutonium using measurements of structural material activation products

    SciTech Connect

    Charlton, W. S.; Lumley-Woodyear, A. de; Budlong-Sylvester, K. W.

    2002-01-01

    As the US and Russia move to lower numbers of deployed nuclear weapons, transparency regarding the quantity of weapons-usable fissile material available in each country may become more important. In some cases detailed historical information regarding material production at individual facilities may be incomplete or not readily available, e.g., at decommissioned facilities. In such cases tools may be needed to produce estimates of aggregate material production as part of a bilateral agreement.
Such measurement techniques could also provide increased confidence in declared production quantities.

  426. Using Skin Sympathetic Nerve Activity to Estimate Stellate Ganglion Nerve Activity in Dogs

    PubMed Central

    Jiang, Zhaolei; Zhao, Ye; Doytchinova, Anisiia; Kamp, Nicholas J.; Tsai, Wei-Chung; Yuan, Yuan; Adams, David; Wagner, David; Shen, Changyu; Chen, Lan S.; Everett, Thomas H.; Lin, Shien-Fong; Chen, Peng-Sheng

    2015-01-01

    Background: Stellate ganglion nerve activity (SGNA) is important in cardiac arrhythmogenesis. However, direct recording of SGNA requires access to the thoracic cavity. The skin of the upper thorax is innervated by sympathetic nerve fibers originating from the stellate ganglia (SG) and is easily accessible. Objective: To test the hypothesis that thoracic skin nerve activity (SKNA) can be used to estimate SGNA. Methods: We recorded SGNA and SKNAs using surface electrocardiogram leads in 5 anesthetized and 4 ambulatory dogs. Apamin injected into the right SG abruptly increased both right SGNA and SKNA in the 5 anesthetized dogs. We integrated nerve activities and averaged heart rate in each one-minute window over 10 min. We implanted a radiotransmitter to record left SGNA in 4 ambulatory dogs, including two normal dogs, one dog with myocardial infarction, and one dog with intermittent rapid atrial pacing. After 2 weeks of recovery, we simultaneously recorded the SKNA and left SGNA continuously for 30 min while the dogs were ambulatory. Results: There was a positive correlation (average r=0.877, 95% confidence interval (CI) 0.732 to 1.000, p<0.05 for each dog) between integrated SKNA (iSKNA) and SGNA (iSGNA) and between iSKNA and heart rate (average r=0.837, 95% CI 0.752 to 0.923, p<0.05). Similar to what was found in the anesthetized dogs, there was a positive correlation (average r=0.746, 95% CI 0.527 to 0.964, p<0.05) between iSKNA and iSGNA and between iSKNA and heart rate (average r=0.706, 95% CI 0.484 to 0.927, p<0.05). Conclusions: SKNAs can be used to estimate SGNA in dogs. PMID:25681792

  427. Likelihood analysis of earthquake focal mechanism distributions

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2015-06-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, so their performance is questionable. Here we apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts.
The advantage of such an approach is that earthquake rate prediction can be adequately combined with a focal mechanism forecast, if both are based on likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation patterns. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, the internal dependence of scores on the forecasted angle, and the random variability of likelihood scores. Here, we propose a simple tentative solution, but extensive theoretical and statistical analysis is needed.
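    The non-uniformity of the random rotation angle mentioned in the record above is easy to check numerically: under the uniform (Haar) distribution on 3-D rotations, the rotation angle has density (1 - cos t)/pi on [0, pi]. A small Monte Carlo verification with numpy/scipy (illustrative only, not the authors' code):

    import numpy as np
    from scipy.spatial.transform import Rotation

    # relative rotation angle between two uniformly random 3-D rotations
    r1 = Rotation.random(100_000, random_state=1)
    r2 = Rotation.random(100_000, random_state=2)
    angles = (r1.inv() * r2).magnitude()          # radians in [0, pi]

    # compare the empirical histogram with the Haar density (1 - cos t)/pi
    hist, edges = np.histogram(angles, bins=30, range=(0.0, np.pi), density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(hist - (1.0 - np.cos(mid)) / np.pi).max())   # close to zero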
  428. Factors Influencing Likelihood of Voice Therapy Attendance.

    PubMed

    Misono, Stephanie; Marmor, Schelomo; Roy, Nelson; Mau, Ted; Cohen, Seth M

    2017-03-01

    Objective: To identify factors associated with the likelihood of attending voice therapy among patients referred for it in the CHEER (Creating Healthcare Excellence through Education and Research) practice-based research network infrastructure. Study Design: Prospectively enrolled cross-sectional study. Setting: CHEER network of community and academic sites. Methods: Data were collected on patient-reported demographics, voice-related diagnoses, voice-related handicap (Voice Handicap Index-10), likelihood of attending voice therapy (VT), and opinions on factors influencing the likelihood of attending VT. The relationships between patient characteristics/opinions and likelihood of attending VT were investigated. Results: A total of 170 patients with various voice-related diagnoses reported receiving a recommendation for VT. Of those, 85% indicated that they were likely to attend it, regardless of voice-related handicap severity. The most common factors influencing likelihood of VT attendance were insurance/copay, relief that it was not cancer, and travel. Those who were not likely to attend VT identified as important factors unclear potential improvement, not understanding the purpose of therapy, and concern that it would be too hard. In multivariate analysis, factors associated with greater likelihood of attending VT included shorter travel distance, age (40-59 years), and being seen in an academic practice. Conclusions: Most patients reported plans to attend VT as recommended. Patients who intended to attend VT reported different considerations in their decision making from those who did not plan to attend. These findings may inform patient counseling and efforts to increase access to voice care.

  429. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen (1988)'s empirical likelihood method we can obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial dataset shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on the observed data only is biased.
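    For readers unfamiliar with Owen (1988)'s method, the profile empirical likelihood for a scalar mean reduces to one-dimensional root-finding for a Lagrange multiplier. A compact sketch of that basic building block (complete-data case only; the paper's non-ignorable-missingness machinery is not reproduced):

    import numpy as np
    from scipy.optimize import brentq

    def neg2_log_elr(x, mu0):
        """-2 log empirical likelihood ratio for the mean mu0 (Owen, 1988)."""
        z = x - mu0
        if z.min() >= 0 or z.max() <= 0:      # mu0 outside the convex hull of x
            return np.inf
        # solve sum_i z_i / (1 + lam*z_i) = 0 for the Lagrange multiplier lam,
        # bracketed so that all weights 1 + lam*z_i stay positive
        lo = (-1 + 1e-10) / z.max()
        hi = (-1 + 1e-10) / z.min()
        lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
        return 2 * np.sum(np.log(1 + lam * z))

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=200)
    print(neg2_log_elr(x, 2.0))   # approximately chi-square(1) under the true mean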
Specific guidelines are provided for…

  431. Cramer-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990, C. V. Tran. …the Cramer-Rao Bound, MUSIC, and Maximum Likelihood (ML) asymptotic variances corresponding to two-source direction-of-arrival estimation, where sources were modeled as… (list-of-figures residue: MUSIC for two equipowered signals impinging on a 5-element ULA, |rho| = 0.50 and 1.00, SNR = 20 dB)
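    For reference, the MUSIC estimator evaluated in the report scans candidate steering vectors against the noise subspace of the array covariance. A minimal narrowband sketch for a 5-element uniform linear array with two equipowered sources (all parameters illustrative, not those of the report):

    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(0)
    m, n_snap = 5, 2000                  # 5-element ULA, number of snapshots
    doas = np.deg2rad([-10.0, 15.0])     # true source directions
    spacing = 0.5                        # element spacing in wavelengths

    def steering(theta):
        return np.exp(2j * np.pi * spacing * np.arange(m)[:, None] * np.sin(theta))

    A = steering(doas)
    S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
    N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
    X = A @ S + N

    R = X @ X.conj().T / n_snap          # sample covariance
    w, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = V[:, : m - 2]                   # noise subspace (m minus number of sources)

    grid = np.deg2rad(np.linspace(-90, 90, 1801))
    p = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
    pk, _ = find_peaks(p)
    print(np.rad2deg(grid[pk[np.argsort(p[pk])[-2:]]]))   # peaks near -10 and 15 deg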
  432. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation, with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed loop bandwidth.

  433. Estimating tail probabilities

    SciTech Connect

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate, and to a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  434. HALM: A Hybrid Asperity Likelihood Model for Italy

    NASA Astrophysics Data System (ADS)

    Gulia, L.; Wiemer, S.

    2009-04-01

    The Asperity Likelihood Model (ALM), first developed and currently being tested for California, hypothesizes that small-scale spatial variations in the b-value of the Gutenberg-Richter relationship play a central role in forecasting future seismicity (Wiemer and Schorlemmer, SRL, 2007). The physical basis of the model is the concept that the local b-value is inversely dependent on applied shear stress. Thus low b-values (b < 0.7) characterize the locked patches of faults, the asperities, from which future mainshocks are more likely to be generated, whereas the high b-values (b > 1.1) found, for example, in creeping sections of faults suggest a lower seismic hazard. To test this model in a reproducible and prospective way suitable for the requirements of the CSEP initiative (www.cseptesting.org), the b-value variability is mapped on a grid. First, using the entire dataset above the overall magnitude of completeness, the regional b-value is estimated. This value is then compared to the one locally estimated at each grid node for a number of radii; we use the local value if its likelihood score, corrected for the degrees of freedom using the Akaike Information Criterion, suggests doing so. We are currently calibrating the ALM model for implementation in the Italian testing region, the first region within the CSEP EU testing Center (eu.cseptesting.org) for which fully prospective tests of earthquake likelihood models will commence in Europe. We are also developing a modified, 'hybrid' approach between a grid-based and a zoning one: the HALM (Hybrid Asperity Likelihood Model). According to HALM, the Italian territory is divided into three distinct regions depending on the main tectonic elements, combined with knowledge derived from GPS networks, seismic profile interpretation, borehole breakouts, and the focal mechanisms of the events. The local b-value variability was thus mapped using three independent overall b-values.
We evaluate the performance of the two models in…

  435. A 3D approximate maximum likelihood localization solver

    SciTech Connect

    2016-09-23

    A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in order to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and to support marine renewable energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in the receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
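    The solver described above works from time-difference-of-arrival (TDOA) measurements across receiving arrays. A toy nonlinear least-squares version of TDOA source localization (hypothetical array geometry, sound speed, and noise level; the solver's approximate maximum likelihood weighting is not reproduced):

    import numpy as np
    from scipy.optimize import least_squares

    c = 1500.0                                   # sound speed in water, m/s
    hydro = np.array([[0, 0, 0], [50, 0, 0],     # hypothetical hydrophone array, metres
                      [0, 50, 0], [0, 0, 30], [50, 50, 30]], float)
    src = np.array([12.0, 34.0, 8.0])            # true source position

    def tdoa(p):
        t = np.linalg.norm(hydro - p, axis=1) / c
        return t[1:] - t[0]                      # arrival-time differences vs. hydrophone 0

    rng = np.random.default_rng(0)
    meas = tdoa(src) + rng.normal(0, 1e-6, 4)    # measured TDOAs with 1 microsecond jitter

    fit = least_squares(lambda p: tdoa(p) - meas, x0=np.array([25.0, 25.0, 15.0]))
    print(fit.x)                                 # close to the true position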
  436. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test (ZIP-LRT) is proposed to identify drug-adverse event pairs with disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization (EM) algorithm. The ZIP-LRT is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP-LRT performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
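    The EM iteration for the zero-inflated Poisson model in the record above is short enough to sketch. A minimal version estimating the mixing weight and Poisson mean from one vector of counts (illustrative; the paper's likelihood ratio test additionally maximizes over drug-adverse event cells and strata):

    import numpy as np

    def fit_zip_em(y, n_iter=200):
        """EM for zero-inflated Poisson: y_i ~ pi*delta_0 + (1 - pi)*Poisson(lam)."""
        pi, lam = 0.5, max(y.mean(), 1e-8)
        for _ in range(n_iter):
            # E-step: posterior probability that an observed zero is a structural zero
            p0 = pi / (pi + (1 - pi) * np.exp(-lam))
            z = np.where(y == 0, p0, 0.0)
            # M-step
            pi = z.mean()
            lam = ((1 - z) * y).sum() / (1 - z).sum()
        return pi, lam

    rng = np.random.default_rng(0)
    y = np.where(rng.random(5000) < 0.3, 0, rng.poisson(2.0, 5000))  # 30% structural zeros
    print(fit_zip_em(y))   # recovers approximately (0.3, 2.0)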
  437. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended to include measurement errors, and makes maximum unbiased use of all available information.

  438. Growing local likelihood network: Emergence of communities

    NASA Astrophysics Data System (ADS)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only about a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  439. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  440. A Maximum Likelihood Method for Latent Class Regression Involving a Censored Dependent Variable.

    ERIC Educational Resources Information Center

    Jedidi, Kamel; And Others

    1993-01-01

    A method is proposed to simultaneously estimate regression functions and subject membership in "k" latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The method is illustrated through a consumer psychology application. (SLD)
  441. Methodology for a bounding estimate of activation source-term.

    PubMed

    Culp, Todd

    2013-02-01

    Sandia National Laboratories' Z-Machine is the world's most powerful electrical device, and experiments have been conducted that make it the world's most powerful radiation source. Because Z-Machine is used for research, an assortment of materials can be placed into the machine; these materials can be subjected to a range of nuclear reactions, producing an assortment of activation products. A methodology was developed to provide a systematic approach to evaluating different materials to be introduced into the machine as wire arrays. This methodology is based on experiment-specific characteristics, the physical characteristics of specific radionuclides, and experience with Z-Machine. It provides a starting point for bounding calculations of the radionuclide source-term that can be used for work planning, development of work controls, and evaluating materials for introduction into the machine.

  442. Estimation of Exercise Intensity in "Exercise and Physical Activity Reference for Health Promotion"

    NASA Astrophysics Data System (ADS)

    Ohkubo, Tomoyuki; Kurihara, Yosuke; Kobayashi, Kazuyuki; Watanabe, Kajiro

    Maintaining and promoting the health of elderly citizens is quite important for Japan.
Given the circumstances, the Ministry of Health, Labour and Welfare has established standards for the activities and exercises that promote health, and has quantitatively determined the exercise intensity of 107 activities. This exercise intensity, however, requires recording the type and duration of each activity in order to be calculated. In this paper, the exercise intensities of 25 daily activities are estimated using a 3D accelerometer. As a result, the exercise intensities were estimated to within a root mean square error of 0.83 METs for all 25 activities.

  443. Non-Exercise Estimation of VO2max Using the International Physical Activity Questionnaire

    ERIC Educational Resources Information Center

    Schembre, Susan M.; Riebe, Deborah A.

    2011-01-01

    Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO2max) as well as sub-maximal exercise testing can. The International Physical Activity Questionnaire is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO2max…

  444. Estimating Toxicity Pathway Activating Doses for High Throughput Chemical Risk Assessments

    EPA Science Inventory

    Estimating a Toxicity Pathway Activating Dose (TPAD) from in vitro assays as an analog to a reference dose (RfD) derived from in vivo toxicity tests would facilitate high throughput risk assessments of thousands of data-poor environmental chemicals.
<span class="hlt">Estimating</span> a TPAD requires def...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JPhCS.762a2034C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JPhCS.762a2034C"><span>Experiments using machine learning to approximate <span class="hlt">likelihood</span> ratios for mixture models</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cranmer, K.; Pavez, J.; Louppe, G.; Brooks, W. K.</p> <p>2016-10-01</p> <p><span class="hlt">Likelihood</span> ratio tests are a key tool in many fields of science. In order to evaluate the <span class="hlt">likelihood</span> ratio the <span class="hlt">likelihood</span> function is needed. However, it is common in fields such as High Energy Physics to have complex simulations that describe the distribution while not having a description of the <span class="hlt">likelihood</span> that can be directly evaluated. In this setting it is impossible or computationally expensive to evaluate the <span class="hlt">likelihood</span>. It is, however, possible to construct an equivalent version of the <span class="hlt">likelihood</span> ratio that can be evaluated by using discriminative classifiers. We show how this can be used to approximate the <span class="hlt">likelihood</span> ratio when the underlying distribution is a weighted sum of probability distributions (e.g. signal plus background model). We demonstrate how the results can be considerably improved by decomposing the ratio and use a set of classifiers in a pairwise manner on the components of the mixture model and how this can be used to <span class="hlt">estimate</span> the unknown coefficients of the model, such as the signal contribution.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18561679','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18561679"><span>Improved <span class="hlt">activity</span> <span class="hlt">estimation</span> with MC-JOSEM versus TEW-JOSEM in 111In SPECT.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ouyang, Jinsong; El Fakhri, Georges; Moore, Stephen C</p> <p>2008-05-01</p> <p>We have previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. A phantom study was performed to compare quantitative imaging performance of MC-JOSEM with that of a triple-energy-window approach (TEW) in which <span class="hlt">estimated</span> scatter was also included additively within JOSEM, TEW-JOSEM. We acquired high-count projections of a 5.5 cm3 sphere of 111In at different locations in the water-filled torso phantom; high-count projections were then obtained with 111In only in the liver or only in the soft-tissue background compartment, so that we could generate synthetic projections for spheres surrounded by various <span class="hlt">activity</span> distributions. MC scatter <span class="hlt">estimates</span> used by MC-JOSEM were computed once after five iterations of TEW-JOSEM. Images of different combinations of liver/background and sphere/background <span class="hlt">activity</span> concentration ratios were reconstructed by both TEW-JOSEM and MC-JOSEM for 40 iterations. 
For <span class="hlt">activity</span> <span class="hlt">estimation</span> in the sphere, MC-JOSEM always produced better relative bias and relative standard deviation than TEW-JOSEM for each sphere location, iteration number, and <span class="hlt">activity</span> combination. The average relative bias of <span class="hlt">activity</span> <span class="hlt">estimates</span> in the sphere for MC-JOSEM after 40 iterations was -6.9%, versus -15.8% for TEW-JOSEM, while the average relative standard deviation of the sphere <span class="hlt">activity</span> <span class="hlt">estimates</span> was 16.1% for MC-JOSEM, versus 27.4% for TEW-JOSEM. Additionally, the average relative bias of <span class="hlt">activity</span> concentration <span class="hlt">estimates</span> in the liver and the background for MC-JOSEM after 40 iterations was -3.9%, versus -12.2% for TEW-JOSEM, while the average relative standard deviation of these <span class="hlt">estimates</span> was 2.5% for MC-JOSEM, versus 3.4% for TEW-JOSEM. MC-JOSEM is a promising approach for quantitative <span class="hlt">activity</span> <span class="hlt">estimation</span> in 111In SPECT.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22126599','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22126599"><span>AN EFFICIENT APPROXIMATION TO THE <span class="hlt">LIKELIHOOD</span> FOR GRAVITATIONAL WAVE STOCHASTIC BACKGROUND DETECTION USING PULSAR TIMING DATA</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Ellis, J. A.; Siemens, X.; Van Haasteren, R.</p> <p>2013-05-20</p> <p>Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10{sup -7} Hz-10{sup -9} Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full <span class="hlt">likelihood</span> function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate <span class="hlt">likelihood</span> function reproduces results obtained from the full <span class="hlt">likelihood</span> function. We further show, both analytically and through simulations, that, on average, this approximate <span class="hlt">likelihood</span> function gives unbiased parameter <span class="hlt">estimates</span> for astrophysically realistic stochastic background amplitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23537030','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23537030"><span>Measuring slope to improve energy expenditure <span class="hlt">estimates</span> during field-based <span class="hlt">activities</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Duncan, Glen E; Lester, Jonathan; Migotsky, Sean; Higgins, Lisa; Borriello, Gaetano</p> <p>2013-03-01</p> <p>This technical note describes methods to improve <span class="hlt">activity</span> energy expenditure <span class="hlt">estimates</span> by using a multi-sensor board (MSB) to measure slope. 
Ten adults walked over a 4-km (2.5-mile) course wearing an MSB and a mobile calorimeter. Energy expenditure was estimated using accelerometry alone (base) and four methods to measure slope. The barometer and global positioning system methods improved accuracy by 11% from the base (p < 0.05) to 86% overall. Measuring slope using the MSB improves energy expenditure estimates during field-based activities.

  449. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion, all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
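    In the simplest setting, the semi-analytic character of such an expansion is visible directly: with a uniform prior on [-1, 1] and Legendre polynomials normalized against that prior, the model evidence is the zeroth expansion coefficient of the likelihood and the posterior mean follows from the first. A one-dimensional toy sketch (assumed Gaussian likelihood; not the authors' implementation):

    import numpy as np
    from numpy.polynomial import legendre as L
    from scipy.stats import norm

    # model: theta ~ Uniform(-1, 1), one observation y = theta + N(0, sigma^2) noise
    y, sigma = 0.3, 0.2
    like = lambda t: norm.pdf(y, loc=t, scale=sigma)

    # a_k = E_prior[ like(theta) * Ptilde_k(theta) ], where Ptilde_k = sqrt(2k+1) P_k
    # is orthonormal w.r.t. the uniform prior density 1/2 on [-1, 1]
    nodes, weights = L.leggauss(60)
    a = np.array([0.5 * np.sum(weights * like(nodes)
                               * np.sqrt(2 * k + 1) * L.legval(nodes, np.eye(6)[k]))
                  for k in range(6)])

    Z = a[0]                          # evidence: coefficient of Ptilde_0 = 1
    mean = a[1] / (np.sqrt(3) * Z)    # since theta = Ptilde_1 / sqrt(3)
    print(Z, mean)

    # brute-force quadrature check of the same quantities
    Zc = 0.5 * np.sum(weights * like(nodes))
    print(Zc, 0.5 * np.sum(weights * nodes * like(nodes)) / Zc)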
  450. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator of the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence, in the Gaussian case, of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
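    The Gaussian-case equivalence referenced above can be demonstrated in a few lines: for linear autoregressions, the transfer entropy from y to x is half the log-ratio of the residual variances of the reduced and full regressions, and 2N times it is the log-likelihood-ratio statistic. A numpy check on a simulated order-1 system (illustrative parameters):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):                 # y drives x; x does not drive y
        y[t] = 0.7 * y[t - 1] + rng.standard_normal()
        x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

    def res_var(target, regressors):
        beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
        return (target - regressors @ beta).var()

    ones = np.ones(n - 1)
    full = np.column_stack([ones, x[:-1], y[:-1]])
    redu = np.column_stack([ones, x[:-1]])
    te = 0.5 * np.log(res_var(x[1:], redu) / res_var(x[1:], full))
    print(te, 2 * (n - 1) * te)   # TE estimate; the LR statistic is chi2(1) under no coupling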
  451. Time estimation during prolonged sleep deprivation and its relation to activation measures.

    PubMed

    Miró, Elena; Cano, M Carmen; Espinosa-Fernández, Lourdes; Buela-Casal, Gualberto

    2003-01-01

    This is the first study to analyze variations in time estimation during 60 h of sleep deprivation and the relation between time estimation performance and the activation measures of skin resistance level, body temperature, and Stanford Sleepiness Scale (SSS) scores. Among 30 healthy participants 18 to 24 years of age, for a 10-s interval using the production method, we found a lengthening of time estimations that was modulated by circadian oscillations. No gender differences were found in the time estimation task during sleep deprivation. The variations in time estimation correlated significantly with body temperature, skin resistance level, and SSS scores throughout the sleep deprivation period. When body temperature is elevated, indicating a high level of activation, the interval tends to be underestimated, and vice versa. When the skin resistance level or SSS score is elevated (low activation), time estimation is lengthened, and vice versa. This lengthening is important because many everyday situations involve duration estimation under moderate to severe sleep loss. Actual or potential applications of this research include transportation systems, emergency response work, sporting activities, and industrial settings in which accuracy in anticipation or coincidence timing is important for safety or efficiency.

  452. Factors Associated with Young Adults' Pregnancy Likelihood

    PubMed Central

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES: While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18-29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting a partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS: We conducted a secondary analysis of 660 young adults, 18-29 years old, in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS: Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80-9.69), as were young adults for whom avoiding a pregnancy was important but who were not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67-9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52-5.94), were uninsured (OR, 2.63; 95% CI, 1.31-5.26), or were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04-3.01). DISCUSSION: These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  453. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics is verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. The approach is to quantify the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, one can go further: the posterior distribution of the model given the observations.

  454. Likelihood-free Bayesian computation for structural model calibration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2016-04-01

    Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem and prone to be ill-posed and ill-conditioned when there are many errors and uncertainties in both the FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly.
Bayesian FE model updating is one of the well-known methods for quantifying parameter uncertainty by updating our prior belief about the parameters with the available measurements. In Bayesian inference, the likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements; it should therefore be chosen carefully to reflect the characteristics of those residuals. It is generally known, however, that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be independent identically distributed Gaussian with zero mean and constant variance. This assumption may cause biased and over/underestimated parameter estimates, so that the uncertainty quantification and prediction become questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduces approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the behavioral similarity between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and that its predictive capability for measurements not used in calibration is also secured.
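    Approximate Bayesian computation of the kind used above replaces likelihood evaluation with a distance between simulated and observed summary statistics. A bare-bones ABC rejection sampler on a conjugate-normal toy problem, so that the exact posterior is available for comparison (the paper's MCMC variant and FE models are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.normal(1.0, 1.0, 50)               # "measurements", true theta = 1
    s_obs = obs.mean()                           # summary statistic

    # ABC rejection: keep prior draws whose simulated summary is close to s_obs
    theta = rng.normal(0.0, 2.0, 100_000)        # prior N(0, 2^2)
    sims = rng.normal(theta, 1.0, (50, 100_000)).mean(axis=0)
    keep = theta[np.abs(sims - s_obs) < 0.02]

    # exact conjugate posterior for comparison
    n, s2, t2 = 50, 1.0, 4.0
    post_var = 1.0 / (n / s2 + 1.0 / t2)
    print(keep.size, keep.mean(), keep.std())
    print(post_var * n * s_obs / s2, np.sqrt(post_var))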
  455. Likelihood reinstates Archaeopteryx as a primitive bird.

    PubMed

    Lee, Michael S Y; Worthy, Trevor H

    2012-04-23

    The widespread view that Archaeopteryx was a primitive (basal) bird has recently been challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with a bootstrap frequency of 73 per cent and a posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx-deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx-bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics.

  456. Experimental infrared point-source detection using an iterative generalized likelihood ratio test algorithm.

    PubMed

    Nichols, J M; Waterman, J R

    2017-03-01

    This work documents the performance of a recently proposed generalized likelihood ratio test (GLRT) algorithm in detecting thermal point-source targets against a sky background. A calibrated source is placed above the horizon at various ranges and then imaged using a mid-wave infrared camera. The proposed algorithm combines a so-called "shrinkage" estimator of the background covariance matrix and an iterative maximum likelihood estimator of the point-source parameters to produce the GLRT statistic. It is clearly shown that the proposed approach results in better detection performance than either standard energy detection or previous implementations of the GLRT detector.
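    A sketch of the two ingredients named in this record, a shrinkage estimate of the clutter covariance and the GLRT for a known signature with unknown amplitude, on simulated patches (the paper's iterative ML estimation of the point-source parameters is reduced here to a known signature):

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(0)
    d, n_bg = 25, 400                       # 5x5-pixel patches, background samples
    cov_true = np.eye(d) + 0.5              # correlated clutter covariance
    bg = rng.multivariate_normal(np.zeros(d), cov_true, size=n_bg)

    # shrinkage (Ledoit-Wolf) estimate of the clutter covariance
    ci = np.linalg.inv(LedoitWolf().fit(bg).covariance_)

    s = np.exp(-0.5 * ((np.arange(d) - d // 2) / 2.0) ** 2)   # assumed target signature

    def glrt(x):
        # GLRT statistic for x = a*s + clutter vs. clutter only (amplitude a fit by ML)
        return (s @ ci @ x) ** 2 / (s @ ci @ s)

    x0 = rng.multivariate_normal(np.zeros(d), cov_true)       # background-only patch
    x1 = x0 + 3.0 * s                                         # patch with a point source
    print(glrt(x0), glrt(x1))                                 # far larger when target present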
457. Bayesian Estimation of the Active Concentration and Affinity Constants Using Surface Plasmon Resonance Technology

    PubMed

    Feng, Feng; Kepler, Thomas B

    2015-01-01

    Surface plasmon resonance (SPR) has previously been employed to measure the active concentration of analyte, in addition to the kinetic rate constants, in molecular binding reactions. Those approaches, however, have a few restrictions. In this work, a Bayesian approach is developed to determine both the active concentration and the affinity constants using SPR technology. With appropriate prior probabilities on the parameters and a derived likelihood function, a Markov chain Monte Carlo (MCMC) algorithm is applied to compute the posterior probability densities of both the active concentration and the kinetic rate constants from the collected SPR data. Compared with previous approaches, ours exploits information from the entire duration of the process, including both association and dissociation phases, under partial mass-transport conditions; it does not depend on calibration data; and multiple injections of analyte at varying flow rates are not necessary. Finally, the method is validated by analyzing both simulated and experimental datasets. A software package implementing our approach has been developed with a user-friendly interface and made freely available.

458. Eliciting information from experts on the likelihood of rapid climate change

    PubMed

    Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil

    2005-12-01

    The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of the information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or to increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore the high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations.
459. Mapping gravitational lensing of the CMB using local likelihoods

    SciTech Connect

    Anderes, Ethan; Knox, Lloyd; Engelen, Alexander van

    2011-02-15

    We present a new estimation method for mapping the gravitational lensing potential from observed CMB intensity and polarization fields. Our method uses Bayesian techniques to estimate the average curvature of the potential over small local regions. These local curvatures are then used to construct an estimate of a low-pass filter of the gravitational potential. By utilizing Bayesian/likelihood methods one can easily overcome problems with missing and/or nonuniform pixels and problems with partial sky observations (E- and B-mode mixing, for example). Moreover, our methods are local in nature, which allows us to easily model spatially varying beams, and are highly parallelizable. We note that our estimates do not rely on the typical Taylor approximation which is used to construct estimates of the gravitational potential by Fourier coupling. We present our methodology with a flat-sky simulation under nearly ideal experimental conditions, with a noise level of 1 μK-arcmin for the temperature field, √2 μK-arcmin for the polarization fields, and an instrumental beam full width at half maximum (FWHM) of 0.25 arcmin.
460. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (quasi-elastic light scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses maximum likelihood estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

461. Tracking of EEG activity using motion estimation to understand brain wiring

    PubMed

    Nisar, Humaira; Malik, Aamir Saeed; Ullah, Rafi; Shim, Seong-O; Bawakid, Abdullah; Khan, Muhammad Burhan; Subhani, Ahmad Rauf

    2015-01-01

    The fundamental step in brain research deals with recording electroencephalogram (EEG) signals and then investigating the recorded signals quantitatively. Topographic EEG (the visual spatial representation of the EEG signal) is commonly referred to as brain topomaps or brain EEG maps. In this chapter, a full-search block motion estimation algorithm is employed to track the brain activity in brain topomaps in order to understand the mechanism of brain wiring. The behavior of EEG topomaps is examined throughout a particular brain activation with respect to time. Motion vectors are used to track the brain activation over the scalp during the activation period. Using motion estimation, it is possible to track the path from the starting point of activation to the final point of activation. Thus it is possible to track the path of a signal across various lobes.
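The full-search block matching step described in the preceding record can be sketched in a few lines: for each block in one topomap frame, exhaustively search a window in the next frame for the displacement that minimizes a sum-of-absolute-differences cost. The block size and search range below are arbitrary illustrative choices, not values from the chapter.

```python
import numpy as np

def full_search_motion(frame0, frame1, block=8, search=4):
    """Exhaustive block-matching motion estimation between two frames
    (e.g., successive EEG topomaps), minimizing sum of absolute differences."""
    H, W = frame0.shape
    vectors = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = frame0[y:y + block, x:x + block]
            best, best_cost = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cost = np.abs(ref - frame1[yy:yy + block, xx:xx + block]).sum()
                        if cost < best_cost:
                            best, best_cost = (dy, dx), cost
            vectors[(y, x)] = best
    return vectors  # one displacement per block, tracing how activation moves

rng = np.random.default_rng(1)
f0 = rng.random((32, 32))
f1 = np.roll(f0, shift=(2, 1), axis=(0, 1))   # second frame shifted by (2, 1)
print(full_search_motion(f0, f1)[(8, 8)])      # expect (2, 1) for interior blocks
```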
462. A wide range of activity duration cutoffs provided unbiased estimates of exposure to computer use

    PubMed

    Chang, Che-Hsu; Johnson, Peter W; Dennerlein, Jack T

    2008-12-01

    Integrative computer usage monitors have become widely used in epidemiologic studies to investigate the exposure-response relationship of computer-related musculoskeletal disorders. These software programs typically estimate the exposure duration of computer use by summing precisely recorded durations of input-device activities and durations of inactivity periods shorter than a predetermined activity duration cutoff value, usually 30 or 60 sec. The goal of this study was to systematically compare the validity of a wide range of cutoff values. Computer use activity of 20 office workers was observed for 4 consecutive hours using both a video camera and a usage monitor. Video recordings from the camera were analyzed using specific observational criteria to determine computer use duration. This observed duration then served as the reference and was compared with 238 estimates of computer use duration calculated from the usage monitor data using activity duration cutoffs ranging from 3 to 240 sec in 1-sec increments. Estimates calculated with cutoffs ranging from 28 to 60 sec were highly correlated with the observed duration (Spearman's correlation 0.87 to 0.92) and had nearly ideal linear relationships with the observed duration (slopes and r-squares close to one, and intercepts close to zero). For the same range of cutoff values, when the observed and estimated durations were compared for dichotomous exposure classification across participants, minimal exposure misclassification was observed. It is concluded that activity duration cutoffs ranging from 28 to 60 sec provided unbiased estimates of computer use duration.
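The cutoff-based summing rule in the abstract above translates directly into code: given timestamps of input events, total usage is the sum of event-to-event gaps no longer than the cutoff, while longer gaps are treated as non-use. This is a generic reimplementation of that rule, not the monitors' actual software.

```python
def computer_use_duration(event_times, cutoff=30.0):
    """Estimate computer-use duration (seconds) from input-event timestamps:
    inactivity gaps no longer than `cutoff` count as continued use."""
    total = 0.0
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap <= cutoff:
            total += gap           # short gap: still considered active use
    return total

# Events at 0-5 s every second, a 100 s pause, then two more events.
events = [0, 1, 2, 3, 4, 5, 105, 106]
print(computer_use_duration(events, cutoff=30))   # -> 6.0 (the 100 s gap is excluded)
```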
463. Gauging the likelihood of stable cavitation from ultrasound contrast agents

    PubMed

    Bader, Kenneth B; Holland, Christy K

    2013-01-07

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of a UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single-cycle or steady-state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I_CAV = P_r/f (where P_r is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
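Because the proposed cavitation index has the same form as the conventional mechanical index except for the square root of frequency, the two are easy to compute side by side; the pressures and frequencies below are arbitrary examples, not exposures from the study.

```python
import math

def cavitation_index(p_r_mpa, f_mhz):
    """I_CAV = P_r / f, with P_r the peak rarefactional pressure (MPa)
    and f the frequency (MHz), as defined in the abstract above."""
    return p_r_mpa / f_mhz

def mechanical_index(p_r_mpa, f_mhz):
    """Conventional MI = P_r / sqrt(f), shown for comparison."""
    return p_r_mpa / math.sqrt(f_mhz)

for p_r, f in [(0.5, 1.0), (1.5, 3.0), (3.0, 7.0)]:
    print(f"P_r={p_r} MPa, f={f} MHz: I_CAV={cavitation_index(p_r, f):.2f}, "
          f"MI={mechanical_index(p_r, f):.2f}")
```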
464. On the Likelihood of Supernova Enrichment of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Williams, Jonathan P.; Gaidos, Eric

    2007-07-01

    We estimate the likelihood of direct injection of supernova ejecta into protoplanetary disks using a model in which the number of stars with disks decreases linearly with time, and clusters expand linearly with time such that their surface density is independent of stellar number. The similarity of disk dissipation and main-sequence lifetimes implies that the typical supernova progenitor is very massive, ~75-100 M⊙. Such massive stars are found only in clusters with ≳10^4 members. Moreover, there is only a small region around a supernova within which disks can survive the blast yet be enriched to the level observed in the solar system. These two factors limit the overall likelihood of supernova enrichment of a protoplanetary disk to ≲1%. If the presence of short-lived radionuclides in meteorites is to be explained in this way, however, the solar system most likely formed in one of the largest clusters in the Galaxy, more than 2 orders of magnitude greater than Orion, where multiple supernovae impacted many disks in a short period of time.

465. Assessing allelic dropout and genotype reliability using maximum likelihood

    PubMed Central

    Miller, Craig R; Joyce, Paul; Waits, Lisette P

    2002-01-01

    A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low-quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071
466. Likelihood of achieving air quality targets under model uncertainties

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced-form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances the ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
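The probabilistic attainment idea above reduces to a simple Monte Carlo loop once a reduced-form response model is available: sample the uncertain inputs, evaluate the projected pollutant level, and report the fraction of draws that meet the target. The linear response surrogate and the parameter distributions below are invented for illustration only.

```python
import numpy as np

def attainment_likelihood(baseline, target, emission_cut, n_draws=100_000, seed=0):
    """Monte Carlo estimate of the likelihood that a control strategy attains
    an air quality target, given uncertain reduced-form model inputs."""
    rng = np.random.default_rng(seed)
    # Illustrative reduced-form response: ozone reduction = sensitivity * cut,
    # with uncertain sensitivity (lognormal) plus uncertain future variability.
    sensitivity = rng.lognormal(mean=np.log(0.4), sigma=0.3, size=n_draws)  # ppb per % cut
    variability = rng.normal(0.0, 1.5, size=n_draws)                        # ppb
    projected = baseline - sensitivity * emission_cut + variability
    return float(np.mean(projected <= target))

# Hypothetical numbers: 82 ppb baseline design value, 75 ppb standard, 20% cut.
print(attainment_likelihood(baseline=82.0, target=75.0, emission_cut=20.0))
```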
467. Generalised Linear Models Incorporating Population Level Information: An Empirical Likelihood Based Approach

    PubMed Central

    Chaudhuri, Sanjay; Handcock, Mark S.; Rendall, Michael S.

    2011-01-01

    In many situations information from a sample of individuals can be supplemented by population-level information on the relationship between a dependent variable and explanatory variables. Inclusion of the population-level information can reduce bias and increase the efficiency of the parameter estimates. Population-level information can be incorporated via constraints on functions of the model parameters. In general the constraints are nonlinear, making the task of maximum likelihood estimation harder. In this paper we develop an alternative approach exploiting the notion of an empirical likelihood. It is shown that within the framework of generalised linear models, the population-level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates using only unconstrained estimation. We also provide computable expressions for the standard errors. We give an application to demographic hazard modelling by combining panel survey data with birth registration data to estimate annual birth probabilities by parity. PMID:22740776

468. Estimation of the activity generated by neutron activation in control rods of a BWR

    PubMed

    Ródenas, José; Gallardo, Sergio; Abarca, Agustín; Juan, Violeta

    2010-01-01

    Control rods are activated by neutron reactions inside the reactor. The activation is produced mainly in the stainless steel and its impurities. The dose produced by this activity is not important inside the reactor, but it has to be taken into account when the rod is withdrawn from the reactor. The activation reactions have been modelled with the MCNP5 code, which is based on the Monte Carlo method. The code gives the number of reactions, which can then be converted into activity.
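Converting a reaction rate into an activity, as the last sentence of the preceding record describes, follows the standard activation relation A = R(1 - e^(-λ t_irr)) e^(-λ t_decay), where R is the constant production (reaction) rate. A small sketch with made-up numbers rather than values from the study:

```python
import math

def induced_activity(reaction_rate, half_life_s, t_irradiation_s, t_decay_s=0.0):
    """Activity (Bq) induced by a constant production rate R (reactions/s):
    A = R * (1 - exp(-lambda * t_irr)) * exp(-lambda * t_decay)."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irradiation_s)   # buildup during irradiation
    return reaction_rate * saturation * math.exp(-lam * t_decay_s)

# Illustrative only: Co-60 (half-life ~5.27 y) produced at 1e8 reactions/s
# over 2 years of irradiation, measured 30 days after withdrawal.
year = 3.156e7
print(induced_activity(1e8, 5.27 * year, 2 * year, 30 * 86400))
```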
469. Three-dimensional ventricular activation imaging by means of equivalent current source modeling and estimation

    PubMed

    Liu, Z; Liu, C; He, B

    2006-01-01

    This paper presents a novel electrocardiographic inverse approach for imaging the 3-D ventricular activation sequence based on the modeling and estimation of the equivalent current density throughout the entire myocardial volume. The spatio-temporal coherence of the ventricular excitation process is utilized to derive the activation time from the estimated time course of the equivalent current density. At each time instant during the period of ventricular activation, the distributed equivalent current density is noninvasively estimated from body surface potential maps (BSPM) using a weighted minimum norm approach with a spatio-temporal regularization strategy based on the singular value decomposition of the BSPMs. The activation time at any given location within the ventricular myocardium is determined as the time point with the maximum local current density estimate. Computer simulation has been performed to evaluate the capability of this approach to image the 3-D ventricular activation sequence initiated from a single pacing site in a physiologically realistic cellular automaton heart model. The simulation results demonstrate that the simulated "true" activation sequence can be accurately reconstructed with an average correlation coefficient of 0.90 and relative error of 0.19, and that the origin of ventricular excitation can be localized with an average localization error of 5.5 mm for 12 different pacing sites distributed throughout the ventricles.
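The two key steps described above, a regularized minimum-norm estimate of current density at each time instant followed by taking the activation time as the argmax of the local current-density magnitude, can be sketched as follows. The lead-field matrix, the uniform weighting, and the regularization parameter are placeholders, not the paper's actual model.

```python
import numpy as np

def minimum_norm_inverse(L, lam=1e-2):
    """Tikhonov-regularized minimum-norm inverse of the lead field L:
    j_hat = L' (L L' + lam I)^-1 b for each BSPM frame b."""
    return L.T @ np.linalg.inv(L @ L.T + lam * np.eye(L.shape[0]))

def activation_times(L, bspm):
    """bspm: (n_electrodes, n_times). Returns, for each source, the time
    index at which the estimated current density magnitude peaks."""
    J = minimum_norm_inverse(L) @ bspm          # (n_sources, n_times)
    return np.argmax(np.abs(J), axis=1)

rng = np.random.default_rng(2)
L = rng.standard_normal((64, 300))              # 64 electrodes, 300 myocardial sources
true_J = np.zeros((300, 50))
true_J[10, 20] = 1.0                            # one source active at time 20
bspm = L @ true_J + 0.01 * rng.standard_normal((64, 50))
print(activation_times(L, bspm)[10])            # expect ~20 for the active source
```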
470. Shielding and activity estimator for template-based nuclide identification methods

    SciTech Connect

    Nelson, Karl Einar

    2013-04-09

    According to one embodiment, a method for estimating an activity of one or more radio-nuclides includes receiving one or more templates, the one or more templates corresponding to one or more radio-nuclides which contribute to a probable solution; receiving one or more weighting factors, each weighting factor representing the contribution of one radio-nuclide to the probable solution; computing an effective areal density for each of the one or more radio-nuclides; computing an effective atomic number (Z) for each of the one or more radio-nuclides; computing an effective metric for each of the one or more radio-nuclides; and computing an estimated activity for each of the one or more radio-nuclides. In other embodiments, computer program products, systems, and other methods are presented for estimating an activity of one or more radio-nuclides.

471. A Maximum-Likelihood Approach to Force-Field Calibration

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    …and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms, with use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2, and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage, but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
472. An updated maximum likelihood approach to open cluster distance determination

    NASA Astrophysics Data System (ADS)

    Palmer, M.; Arenou, F.; Luri, X.; Masana, E.

    2014-04-01

    Aims: An improved method for estimating distances to open clusters is presented and applied to Hipparcos data for the Pleiades and the Hyades. The method is applied in the context of the historic Pleiades distance problem, with a discussion of previous criticisms of Hipparcos parallaxes. This is followed by an outlook for Gaia, where the improved method could be especially useful. Methods: Based on maximum likelihood estimation, the method combines parallax, position, apparent magnitude, colour, proper motion, and radial velocity information to estimate the parameters describing an open cluster precisely and without bias. Results: We find the distance to the Pleiades to be 120.3 ± 1.5 pc, in accordance with previously published work using the same dataset. We find that error correlations cannot be responsible for the still-present discrepancy between Hipparcos and photometric methods. Additionally, the three-dimensional space velocity and physical structure of the Pleiades are parametrised, and we find strong evidence of mass segregation. The distance to the Hyades is found to be 46.35 ± 0.35 pc, also in accordance with previous results. Through the use of simulations, we confirm that the method is unbiased, so it will be useful for accurate open cluster parameter estimation with Gaia at distances up to several thousand parsec. Appendices are available in electronic form at http://www.aanda.org

473. LIKEDM: Likelihood calculator of dark matter detection

    NASA Astrophysics Data System (ADS)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool, LIKEDM (likelihood calculator for dark matter detection), is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0), focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

474. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points ((x_i, y_i))_{i=1}^{n} from F x F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial-time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
475. Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Hossain, J.; Du, Y.; Links, J.; Rahmim, A.; Karakatsanis, N.; Akhbardeh, A.; Lyons, J.; Frey, E. C.

    2015-04-01

    Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of 99mTc-teboroxime imaging. Uptake of 99mTc-teboroxime based on data from the literature was modeled. Data were simulated using the anatomically realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time-activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of the estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the myocardial and blood-pool TACs for organ VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (the product of defect extent and severity) was ≤5%. This indicates that the method was robust to modest model mismatch, such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects, where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈5% for a defect with 25% extent and 100% severity).

476. The Work and Home Activities Questionnaire: Energy Expenditure Estimates and Association With Percent Body Fat

    PubMed Central

    Block, Gladys; Jensen, Christopher D.; Block, Torin J.; Norris, Jean; Dalvi, Tapashi B.; Fung, Ellen B.

    2015-01-01

    Background: Understanding and increasing physical activity requires assessment of occupational, home, leisure, and sedentary activities. Methods: A physical activity questionnaire was developed using data from a large representative U.S. sample; it includes occupational, leisure, and home-based domains, and produces estimates of energy expenditure, percent body fat, minutes in various domains, and meeting recommendations. It was tested in 396 persons, mean age 44 years. Estimates were evaluated in relation to percent body fat measured by dual-energy x-ray absorptiometry. Results: Median energy expenditure was 2,365 kcal (women) and 2,960 kcal (men). Women spent 35.1 minutes/day in moderate household activities, 13.0 minutes in moderate leisure, and 4.0 minutes in vigorous activities. Men spent 18.0, 22.5, and 15.6 minutes/day in those activities, respectively. Men and women spent 276.4 and 257.0 minutes/day in sedentary activities. Respondents who met recommendations through vigorous activities had significantly lower percent body fat than those who did not, while meeting recommendations only through moderate activities was not associated with percent body fat. Predicted and observed percent body fat correlated at r = .73 and r = .82 for men and women, respectively, P < .0001. Conclusions: This questionnaire may be useful for understanding the health effects of different components of activity, and for interventions to increase activity levels. PMID:19998851

477. Planck 2013 results. XV. CMB power spectra and likelihood

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the…
478. Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty

    ERIC Educational Resources Information Center

    Lagattuta, Kristin Hansen; Sayfan, Liat

    2011-01-01

    Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…

479. Cortical connective field estimates from resting state fMRI activity

    PubMed

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting-state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen-level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that—despite some variability in CF estimates between RS scans—neural properties such as CF maps and CF size can be derived from resting state data.
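At its core, CF modeling as summarized above regresses each target-area voxel's time series on a weighted sum of source-area voxel time series, with weights given by a Gaussian kernel over the source area's cortical surface; the best-fitting kernel center and width define the CF. The sketch below uses distances along a 1-D strip of source voxels as a stand-in for cortical geodesic distances, and a simple grid search over candidate centers and sizes.

```python
import numpy as np

def fit_connective_field(source_ts, source_pos, target_ts, sizes=(1, 2, 4, 8)):
    """Grid-search Gaussian connective field: find the (center, size) whose
    weighted sum of source time series best correlates with the target voxel.
    source_ts: (n_src, n_t); source_pos: (n_src,) positions; target_ts: (n_t,)."""
    best = (None, None, -np.inf)
    for center in source_pos:
        d2 = (source_pos - center) ** 2
        for sigma in sizes:
            w = np.exp(-d2 / (2 * sigma ** 2))
            pred = w @ source_ts                    # CF-weighted source signal
            r = np.corrcoef(pred, target_ts)[0, 1]
            if r > best[2]:
                best = (center, sigma, r)
    return best  # (cf_center, cf_size, correlation)

rng = np.random.default_rng(3)
pos = np.arange(50.0)                               # 50 source voxels on a strip
src = rng.standard_normal((50, 200))                # resting-state time series
w_true = np.exp(-(pos - 20.0) ** 2 / (2 * 4.0 ** 2))
tgt = w_true @ src + 0.5 * rng.standard_normal(200)
print(fit_connective_field(src, pos, tgt))          # center near 20, size near 4
```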
480. Cortical connective field estimates from resting state fMRI activity

    PubMed Central

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V.; Dumoulin, Serge O.; Renken, Remco; Ćurčić-Blake, Branislava; Cornelissen, Frans W.

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting-state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen-level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that—despite some variability in CF estimates between RS scans—neural properties such as CF maps and CF size can be derived from resting state data. PMID:25400541
481. Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings

    PubMed Central

    Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan

    2017-01-01

    Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed better decoding performance than conventional linear estimators. In addition, the processing time delay satisfied real-time constraints. These results demonstrate that the proposed method could be applicable for providing real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
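The MAV feature named above is simply the mean of the rectified signal over a sliding window, computed per channel; a windowed version for streaming multiunit data might look like the following (the window length and overlap are arbitrary choices, not the study's settings).

```python
import numpy as np

def mav_features(signal, win=200, step=100):
    """Mean-absolute-value features over sliding windows.
    signal: (n_channels, n_samples); returns (n_channels, n_windows)."""
    n_ch, n_s = signal.shape
    starts = range(0, n_s - win + 1, step)
    return np.stack(
        [np.abs(signal[:, s:s + win]).mean(axis=1) for s in starts], axis=1
    )

rng = np.random.default_rng(4)
neural = rng.standard_normal((32, 2000))     # 32 channels of multiunit activity
feats = mav_features(neural)                 # one MAV value per channel per window
print(feats.shape)                           # (32, 19)
```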
482. An examination of the differences between two methods of estimating energy expenditure in resistance training activities

    PubMed

    Vezina, Jesse W; Der Ananian, Cheryl A; Campbell, Kathryn D; Meckes, Nathanael; Ainsworth, Barbara E

    2014-04-01

    To date, few studies have looked at the energy expenditure (EE) of individual resistance training (RT) exercises. The purpose of this study was to evaluate the EE of 4 modes of RT (push-ups, curl-ups, pull-ups, and lunges) using 2 different calculation methods for estimating EE. Twelve healthy men with a minimum of 1 year of RT experience were randomly assigned to an RT circuit. Each circuit contained the 4 RT exercises in a specified order. The participants completed 3 trials of their assigned circuit during one visit to the laboratory. Oxygen consumption was measured continuously throughout the trial using indirect calorimetry. Two different calculation methods were applied to estimate EE. Using the traditional method (TEC), we estimated EE by calculating the average oxygen consumption recorded during each activity. Using the second, nontraditional method (NEC), we estimated EE by calculating the average oxygen consumption recorded during the recovery period. Independent t-tests were used to evaluate mean EE differences between the 2 methods. Estimates of EE obtained from the NEC were significantly higher for all 4 activities (p < 0.001). Using the NEC, 3 of the 4 activities were classified as vigorous intensity (push-ups: 6.91 metabolic equivalents (METs); lunges: 7.52 METs; and pull-ups: 8.03 METs), whereas none were classified as vigorous using the TEC. Findings suggest that the methods we use to calculate the EE of anaerobic activities significantly affect EE estimates. Using the TEC may underestimate the actual EE of anaerobic activities.

483. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

484. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

ERIC Educational Resources Information Center

Rabe-Hesketh, Sophia; Skrondal, Anders

2007-01-01

Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

Comparison of Maximum Likelihood and Pearson Chi-Square Statistics for Assessing Latent Class Models.

ERIC Educational Resources Information Center

Holt, Judith A.; Macready, George B.

When latent class parameters are estimated, maximum likelihood and Pearson chi-square statistics can be derived for assessing the fit of the model to the data. This study used simulated data to compare these two statistics, and is based on mixtures of latent binomial distributions, using data generated from five dichotomous manifest variables.…

Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

ERIC Educational Resources Information Center

Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

2012-01-01

This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data

ERIC Educational Resources Information Center

Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.

2010-01-01

Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about the levels of depression severity at which the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…
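The two fit statistics compared in the Holt and Macready entry above are easy to state explicitly: Pearson's X^2 sums squared deviations of observed from expected counts, while the likelihood-ratio statistic G^2 sums observed counts times log-ratios. The counts and degrees of freedom below are invented for illustration.

```python
# Pearson X^2 and likelihood-ratio G^2 fit statistics for a latent class model.
import numpy as np
from scipy.stats import chi2

observed = np.array([38, 52, 41, 69])          # hypothetical response-pattern counts
expected = np.array([35.0, 55.0, 45.0, 65.0])  # counts implied by the fitted model

pearson_x2 = np.sum((observed - expected) ** 2 / expected)
g2 = 2.0 * np.sum(observed * np.log(observed / expected))

df = 1  # cells minus independent parameters; assumed here for the example
print(f"X^2 = {pearson_x2:.2f}, p = {chi2.sf(pearson_x2, df):.3f}")
print(f"G^2 = {g2:.2f},  p = {chi2.sf(g2, df):.3f}")
```

Both statistics are asymptotically chi-square under the model; their small-sample behaviour with sparse tables is exactly what simulation studies of this kind probe.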
Neural network versus activity-specific prediction equations for energy expenditure estimation in children.

PubMed

Ruch, Nicole; Joss, Franziska; Jimmy, Gerda; Melzer, Katarina; Hänggi, Johanna; Mäder, Urs

2013-11-01

The aim of this study was to compare the energy expenditure (EE) estimations of activity-specific prediction equations (ASPE) and of an artificial neural network (ANNEE) based on accelerometry with measured EE. Forty-three children (age: 9.8 ± 2.4 yr) performed eight different activities. They were equipped with one tri-axial accelerometer that collected data in 1-s epochs and a portable gas analyzer. The ASPE and the ANNEE were trained to estimate the EE by including accelerometry and the age, gender, and weight of the participants. To provide the activity-specific information, a decision tree was trained to recognize the type of activity through accelerometer data. The ASPE were applied to the activity-type-specific data recognized by the tree (Tree-ASPE). The Tree-ASPE precisely estimated the EE of all activities except cycling [bias: -1.13 ± 1.33 metabolic equivalents (MET)] and walking (bias: 0.29 ± 0.64 MET; P < 0.05). The ANNEE overestimated the EE of stationary activities (bias: 0.31 ± 0.47 MET) and walking (bias: 0.61 ± 0.72 MET) and underestimated the EE of cycling (bias: -0.90 ± 1.18 MET; P < 0.05). Biases of EE in stationary activities (ANNEE: 0.31 ± 0.47 MET; Tree-ASPE: 0.08 ± 0.21 MET) and walking (ANNEE: 0.61 ± 0.72 MET; Tree-ASPE: 0.29 ± 0.64 MET) were significantly smaller in the Tree-ASPE than in the ANNEE (P < 0.05). The Tree-ASPE was more precise in estimating the EE than the ANNEE. The use of activity-type-specific information for subsequent EE prediction equations might be a promising approach for future studies.
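The two-stage "Tree-ASPE" pipeline described above (classify the activity first, then apply an activity-specific equation) can be sketched as follows. The scikit-learn models, feature set, and synthetic data are stand-ins chosen for illustration, not the study's implementation.

```python
# Two-stage EE estimation: a decision tree recognises the activity type,
# then a per-activity regression predicts energy expenditure (METs).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 3))                # e.g. per-epoch counts, variance, subject traits
activity = rng.integers(0, 3, size=n)      # 0=stationary, 1=walking, 2=cycling
mets = np.choose(activity, [1.3, 3.5, 6.0]) + 0.5 * X[:, 0] * (activity > 0)

tree = DecisionTreeClassifier(max_depth=3).fit(X, activity)

# One activity-specific prediction equation per recognised activity type.
models = {a: LinearRegression().fit(X[activity == a], mets[activity == a])
          for a in np.unique(activity)}

def predict_mets(x: np.ndarray) -> float:
    a = tree.predict(x.reshape(1, -1))[0]          # stage 1: recognise activity
    return models[a].predict(x.reshape(1, -1))[0]  # stage 2: apply its equation

print(f"predicted: {predict_mets(X[0]):.2f} METs")
```

Conditioning the regression on the recognised activity is what lets each equation stay simple while the ensemble covers heterogeneous movement types.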
EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

NASA Astrophysics Data System (ADS)

Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

2016-12-01

Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better understanding of the underlying neural activity. In this simulation study we show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework to perform joint EEG-fMRI estimation of the neural activity time course. The neural activity originates from a given brain area and is detected by both measurement techniques. We chose a resting-state neural activity scenario to address the worst case in terms of signal-to-noise ratio. To infer information from EEG and fMRI concurrently, we used a tool belonging to the family of sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvement in neural activity reconstruction achievable with simultaneous EEG-fMRI data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
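A toy version of the particle-filter idea in the entry above: a scalar latent "neural activity" state is tracked from two noisy channels standing in for EEG and fMRI, and each particle is weighted by the product of the two likelihoods. The AR(1) dynamics, direct Gaussian observation models, and noise levels are assumptions for illustration; the paper's generative model (including the haemodynamic response) is more involved.

```python
# Bootstrap particle filter fusing two observation streams of one latent state.
import numpy as np

rng = np.random.default_rng(42)
T, N = 100, 500                      # time steps, particles
a, q = 0.95, 0.1                     # AR(1) state dynamics
r_eeg, r_fmri = 0.5, 0.3             # observation noise std devs

# Simulate ground truth and the two measurement streams.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.normal()
y_eeg = x + r_eeg * rng.normal(size=T)
y_fmri = x + r_fmri * rng.normal(size=T)

particles = rng.normal(size=N)
estimate = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.normal(size=N)      # propagate
    # Joint weight: product of EEG and fMRI likelihoods (log-domain for stability)
    logw = (-0.5 * ((y_eeg[t] - particles) / r_eeg) ** 2
            - 0.5 * ((y_fmri[t] - particles) / r_fmri) ** 2)
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimate[t] = np.sum(w * particles)                     # posterior mean
    particles = rng.choice(particles, size=N, p=w)          # resample

print(f"RMSE: {np.sqrt(np.mean((estimate - x) ** 2)):.3f}")
```

Dropping either likelihood term from `logw` degrades the RMSE, which mirrors the paper's finding that the joint estimate beats either modality alone.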
Non-invasive Estimation of Global Activation Sequence using the Extended Kalman Filter

PubMed Central

Liu, Chenguang; He, Bin

2011-01-01

A new algorithm for three-dimensional (3D) imaging of the activation sequence from noninvasive body surface potentials is proposed. After formulating the nonlinear relationship between the 3D activation sequence and the body surface recordings during activation, the extended Kalman filter (EKF) is utilized to estimate the activation sequence in a recursive way. The state vector containing the activation sequence is optimized during iteration by updating the error covariance matrix. A new regularization scheme is incorporated into the "predict" step of the EKF to tackle the ill-posedness of the inverse problem. The EKF-based algorithm shows good performance in simulation under single-site pacing. Between the estimated activation sequences and true values, the average correlation coefficient (CC) is 0.95, and the relative error (RE) is 0.13. The average localization error (LE) when localizing the pacing site is 3.0 mm. Good results are also obtained under dual-site pacing (CC = 0.93, RE = 0.16, LE = 4.3 mm). Furthermore, the algorithm shows robustness to noise. These promising results demonstrate that the proposed EKF-based inverse approach can noninvasively estimate the 3D activation sequence with good accuracy, owing to the recursive estimation afforded by the EKF. PMID:20716498

Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference

NASA Astrophysics Data System (ADS)

Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro

This paper presents a reduced-reference video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video-quality estimation using only low-bit-rate original video information (15 kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15 kbps side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
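The reduced-reference scheme in the Yamada et al. entry can be illustrated in a few lines: transmit only per-block "activity" values of the original frame, then score the received frame by the activity difference. The 16x16 block size, the use of block variance as the activity measure, and the final score mapping below are assumptions for the sketch; the paper's psychovisual weighting is omitted.

```python
# Reduced-reference quality estimate from per-block activity differences.
import numpy as np

def block_activity(frame: np.ndarray, bs: int = 16) -> np.ndarray:
    """One activity value (here: variance) per bs x bs pixel block."""
    h, w = frame.shape
    blocks = frame[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    return blocks.var(axis=(1, 3))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(480, 640)).astype(float)
received = original + rng.normal(0, 12, size=original.shape)  # simulated degradation

ref_side_info = block_activity(original)   # the only data sent to the end-user terminal
mean_diff = np.abs(block_activity(received) - ref_side_info).mean()
score = 5.0 - min(4.0, mean_diff / 50.0)   # crude mapping onto a 1-5 quality scale
print(f"estimated quality: {score:.2f}")
```

Because only the compact activity map crosses the network, the monitor never needs the full reference video, which is what keeps the side-information rate in the 15 kbps range the authors report.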
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment

PubMed Central

Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael

2015-01-01

Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from de Leva's adjustments of the Zatsiorsky-Seluyanov segment inertia parameters, with adjustment made for posture cost. A GPML toolbox implementation of Gaussian process regression, a locally weighted k-nearest-neighbour regression, and a linear regression technique were evaluated for their performance in predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian process regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming…
Dorsomedial prefrontal cortex activity predicts the accuracy in estimating others' preferences

PubMed Central

Kang, Pyungwon; Lee, Jongbin; Sul, Sunhae; Kim, Hackjin

2013-01-01

The ability to accurately estimate another person's preferences is crucial for a successful social life. In daily interactions, we often do this on the basis of minimal information. The aims of the present study were (a) to examine whether people can accurately judge others based only on a brief exposure to their appearances, and (b) to reveal the underlying neural mechanisms with functional magnetic resonance imaging (fMRI). Participants were asked to make guesses about unfamiliar target individuals' preferences for various items after looking at their faces for 3 s. The behavioral results showed that participants estimated others' preferences above chance level. The fMRI data revealed that higher accuracy in preference estimation was associated with greater activity in the dorsomedial prefrontal cortex (DMPFC) when participants were guessing the targets' preferences relative to thinking about their own preferences. These findings suggest that accurate estimation of others' preferences may require increased activity in the DMPFC. A functional connectivity analysis revealed that higher accuracy in preference estimation was related to increased functional connectivity between the DMPFC and brain regions known to be involved in theory-of-mind processing, such as the temporoparietal junction (TPJ) and the posterior cingulate cortex (PCC)/precuneus, during correct vs. incorrect guessing trials. By contrast, the tendency to refer to self-preferences when estimating others' preferences was related to greater activity in the ventromedial prefrontal cortex. These findings imply that the DMPFC may be a core region in estimating the preferences of others and that higher accuracy may require stronger communication between the DMPFC and the TPJ and PCC/precuneus, part of a neural network known to be engaged in mentalizing. PMID:24324419

Assimilation of active and passive microwave observations for improved estimates of soil moisture and crop growth

Technology Transfer Automated Retrieval System (TEKTRAN)

An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo…

Estimates of genetic parameters among scale activity scores, growth, and fatness in pigs

Technology Transfer Automated Retrieval System (TEKTRAN)

Genetic parameters for scale activity score were estimated from generations 5, 6, and 7 of a randomly selected, composite population composed of Duroc, Large White, and two sources of Landrace (n = 2,186). At approximately 156 d of age, pigs were weighed (WT) and ultrasound backfat measurements (BF1…
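The soil-moisture entry above mentions an ensemble Kalman filter (EnKF) linking a crop model to microwave observation models; the abstract is truncated, so the sketch below is only a generic EnKF analysis step. The two-variable state, the toy brightness-temperature observation operator, and all error levels are invented for illustration.

```python
# Generic ensemble Kalman filter analysis step for a two-variable land state.
import numpy as np

rng = np.random.default_rng(3)
n_ens = 30
ensemble = rng.normal([0.25, 2.0], [0.05, 0.4], size=(n_ens, 2))  # [soil moisture, biomass]

def obs_operator(states: np.ndarray) -> np.ndarray:
    """Toy radiative-transfer stand-in: brightness-temperature-like observable."""
    return 260.0 - 80.0 * states[:, 0] + 3.0 * states[:, 1]

obs, obs_err = 245.0, 2.0
Hx = obs_operator(ensemble)

# Kalman gain from ensemble (cross-)covariances.
x_anom = ensemble - ensemble.mean(axis=0)
hx_anom = Hx - Hx.mean()
P_xh = x_anom.T @ hx_anom / (n_ens - 1)
P_hh = hx_anom @ hx_anom / (n_ens - 1) + obs_err ** 2
K = P_xh / P_hh

# Update each member against a perturbed observation (stochastic EnKF).
perturbed = obs + obs_err * rng.normal(size=n_ens)
ensemble += np.outer(perturbed - Hx, K)
print("analysis mean:", ensemble.mean(axis=0))
```

Because the gain is built from ensemble cross-covariances, a brightness-temperature observation can correct the biomass component too, which is the sense in which active and passive observations complement each other in such frameworks.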
Such cortical <span class="hlt">activity</span> was <span class="hlt">estimated</span> in region of interest associated with the subject's Brodmann areas by using a depth-weighted minimum norm technique. Results showed that the use of the cortical-<span class="hlt">estimated</span> <span class="hlt">activity</span> instead of the unprocessed EEG improves the recognition of the mental states associated to the limb movement imagination in the group of normal subjects. The BCI methodology presented here has been used in a group of disabled patients in order to give them a suitable control of several electronic devices disposed in a three-room environment devoted to the neurorehabilitation. Four of six patients were able to control several electronic devices in this domotic context with the BCI system. PMID:18350134</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18350134','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18350134"><span>The <span class="hlt">estimation</span> of cortical <span class="hlt">activity</span> for brain-computer interface: applications in a domotic context.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Babiloni, F; Cincotti, F; Marciani, M; Salinari, S; Astolfi, L; Tocci, A; Aloise, F; De Vico Fallani, F; Bufalari, S; Mattia, D</p> <p>2007-01-01</p> <p>In order to analyze whether the use of the cortical <span class="hlt">activity</span>, <span class="hlt">estimated</span> from noninvasive EEG recordings, could be useful to detect mental states related to the imagination of limb movements, we <span class="hlt">estimate</span> cortical <span class="hlt">activity</span> from high-resolution EEG recordings in a group of healthy subjects by using realistic head models. Such cortical <span class="hlt">activity</span> was <span class="hlt">estimated</span> in region of interest associated with the subject's Brodmann areas by using a depth-weighted minimum norm technique. Results showed that the use of the cortical-<span class="hlt">estimated</span> <span class="hlt">activity</span> instead of the unprocessed EEG improves the recognition of the mental states associated to the limb movement imagination in the group of normal subjects. The BCI methodology presented here has been used in a group of disabled patients in order to give them a suitable control of several electronic devices disposed in a three-room environment devoted to the neurorehabilitation. Four of six patients were able to control several electronic devices in this domotic context with the BCI system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920017650','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920017650"><span>Modelling default and <span class="hlt">likelihood</span> reasoning as probabilistic</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Buntine, Wray</p> <p>1990-01-01</p> <p>A probabilistic analysis of plausible reasoning about defaults and about <span class="hlt">likelihood</span> is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. 
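The depth-weighted minimum norm estimate mentioned in the Babiloni et al. entry above has a compact closed form: sources are weighted by the inverse norm of their lead-field columns so that deep sources are not systematically penalised, with Tikhonov regularisation for the ill-posed inverse. The lead field, data, and regularisation level below are random placeholders, not a realistic head model.

```python
# Depth-weighted, Tikhonov-regularised minimum norm inverse:
#   x_hat = W A^T (A W A^T + lam I)^(-1) y
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 32, 500
A = rng.normal(size=(n_sensors, n_sources))     # lead-field (forward) matrix
y = rng.normal(size=n_sensors)                  # one time-slice of EEG data

w = 1.0 / np.linalg.norm(A, axis=0)             # depth weighting per source column
W = np.diag(w ** 2)                             # source covariance prior
lam = 0.1 * np.trace(A @ W @ A.T) / n_sensors   # regularisation level (assumed)

G = A @ W @ A.T + lam * np.eye(n_sensors)
x_hat = W @ A.T @ np.linalg.solve(G, y)         # estimated source amplitudes
print(x_hat.shape)
```

Averaging `x_hat` over the dipoles belonging to a Brodmann-area region of interest yields the per-ROI waveforms that the BCI classifier then operates on.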
Modelling default and likelihood reasoning as probabilistic

NASA Technical Reports Server (NTRS)

Buntine, Wray

1990-01-01

A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals, in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.