Sample records for multipoint-likelihood maximization mapping

  1. CARHTA GENE: multipopulation integrated genetic and radiation hybrid mapping.

    PubMed

    de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas

    2005-04-15

    CAR(H)(T)A GENE is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CAR(H)(T)A GENE performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CAR(H)(T)A GENE can be used as a C++ library, through a shell command, and through a graphical interface. XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with open source). tschiex@toulouse.inra.fr.

  2. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
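    The core idea of the weighted statistic (a minimal sketch, not the authors' exact implementation; the per-order lod values and posterior weights below are hypothetical inputs) can be written as:

```python
import math

def weighted_lod(order_lods, order_posteriors):
    """Posterior-weighted multipoint lod at one map position.

    order_lods       : lod score under each plausible marker order
    order_posteriors : posterior probability of each order (must sum to 1)
    Returns log10 of the posterior-weighted sum of likelihood ratios.
    """
    assert abs(sum(order_posteriors) - 1.0) < 1e-9
    return math.log10(sum(w * 10.0 ** lod
                          for w, lod in zip(order_posteriors, order_lods)))
```

    With a single marker order the statistic reduces to the ordinary lod; an order that is nearly as probable as the best one contributes almost equally to the weighted sum, which is exactly the information the best-map-only approach discards.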

  3. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} were correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize its likelihood for the experimental crystal structure. An improved electron density map is constructed from the maximized structure factors.

  4. Distribution and magnitude of type I error of model-based multipoint lod scores: implications for multipoint mod scores.

    PubMed

    Xing, Chao; Elston, Robert C

    2006-07-01

    The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed with mean and variance that depend on marker informativity, marker density, specified genetic model, number of pedigrees, pedigree structure, and pattern of affection status. When the multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.

  5. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498

  6. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644

  7. Distribution of Model-based Multipoint Heterogeneity Lod Scores

    PubMed Central

    Xing, Chao; Morris, Nathan; Xing, Guan

    2011-01-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. PMID:21104892

  8. Distribution of model-based multipoint heterogeneity lod scores.

    PubMed

    Xing, Chao; Morris, Nathan; Xing, Guan

    2010-12-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
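    Under the ½χ²₀ + ½χ²₁ limiting distribution, the point mass at zero contributes nothing to the tail, so the p-value of a positive likelihood-ratio statistic is half the χ²₁ tail probability (a sketch; the statistic here is 2 ln of the likelihood ratio, i.e. 2 ln 10 times the HLOD):

```python
import math

def hlod_mixture_pvalue(lr_stat):
    """Tail probability of the 1/2*chi2(0) + 1/2*chi2(1) mixture.

    chi2(0) is a point mass at zero, so for x > 0 only the chi2(1)
    component contributes: p = 0.5 * P(chi2_1 > x), where the chi2(1)
    survival function is erfc(sqrt(x/2)).
    """
    if lr_stat <= 0.0:
        return 1.0
    return 0.5 * math.erfc(math.sqrt(lr_stat / 2.0))
```

    For example, a statistic whose naive χ²₁ p-value is 0.05 has p = 0.025 under the mixture.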

  9. Genetics of recurrent early-onset major depression (GenRED): significant linkage on chromosome 15q25-q26 after fine mapping with single nucleotide polymorphism markers.

    PubMed

    Levinson, Douglas F; Evgrafov, Oleg V; Knowles, James A; Potash, James B; Weissman, Myrna M; Scheftner, William A; Depaulo, J Raymond; Crowe, Raymond R; Murphy-Eberenz, Kathleen; Marta, Diana H; McInnis, Melvin G; Adams, Philip; Gladis, Madeline; Miller, Erin B; Thomas, Jo; Holmans, Peter

    2007-02-01

    The authors studied a dense map of single nucleotide polymorphism (SNP) DNA markers on chromosome 15q25-q26 to maximize the informativeness of genetic linkage analyses in a region where they previously reported suggestive evidence for linkage of recurrent early-onset major depressive disorder. In 631 European-ancestry families with multiple cases of recurrent early-onset major depressive disorder, 88 SNPs were genotyped, and multipoint allele-sharing linkage analyses were carried out. Marker-marker linkage disequilibrium was minimized, and a simulation study with founder haplotypes from these families suggested that linkage scores were not inflated by linkage disequilibrium. The dense SNP map increased the information content of the analysis from around 0.7 to over 0.9. The maximum evidence for linkage was the Z likelihood ratio score statistic of Kong and Cox (Z(LR))=4.69 at 109.8 cM. The exact p value was below the genomewide significance threshold. By contrast, in the genome scan with microsatellite markers at 9 cM spacing, the maximum Z(LR) for European-ancestry families was 3.43 (106.53 cM). It was estimated that the linked locus or loci in this region might account for a 20% or less populationwide increase in risk to siblings of cases. This region has produced modestly positive evidence for linkage to depression and related traits in other studies. These results suggest that DNA sequence variations in one or more genes in the 15q25-q26 region can increase susceptibility to major depression and that efforts are warranted to identify these genes.

  10. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.

  11. A radiation hybrid map of the distal short arm of human chromosome 11, containing the Beckwith-Wiedemann and associated embryonal tumor disease loci

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard, C.W. III; Berg, D.J.; Meeker, T.C.

    1993-05-01

    The authors describe a high-resolution radiation hybrid (RH) map of the distal short arm of human chromosome 11 containing the Beckwith-Wiedemann gene and the associated embryonal tumor disease loci. Thirteen human 11p15 genes and 17 new anonymous probes were mapped by a statistical analysis of the cosegregation of markers in 102 rodent-human radiation hybrids retaining fragments of human chromosome 11. The 17 anonymous probes were generated from lambda phage containing human 11p15.5 inserts, by using ALU-PCR. A comprehensive map of all 30 loci and a framework map of nine clusters of loci ordered at odds of 1,000:1 were constructed by a multipoint maximum-likelihood approach by using the computer program RHMAP. This RH map localizes one new gene to chromosome 11p15 (WEE1), provides more precise order information for several 11p15 genes (CTSD, H19, HPX, ST5, RNH, and SMPD1), confirms previous map orders for other 11p15 genes (CALCA, PTH, HBBC, TH, HRAS, and DRD4), and maps 17 new anonymous probes within the 11p15.5 region. This RH map should prove useful in better defining the positions of the Beckwith-Wiedemann and associated embryonal tumor disease-gene loci. 41 refs., 1 fig., 2 tabs.

  12. A study comparing precision of the maximum multipoint heterogeneity LOD statistic to three model-free multipoint linkage methods.

    PubMed

    Finch, S J; Chen, C H; Gordon, D; Mendell, N R

    2001-12-01

    This study compared the performance of the maximum lod (MLOD), maximum heterogeneity lod (MHLOD), maximum non-parametric linkage score (MNPL), maximum Kong and Cox linear extension (MKC(lin)) of NPL, and maximum Kong and Cox exponential extension (MKC(exp)) of NPL as calculated in Genehunter 1.2 and Genehunter-Plus. Our performance measure was the distance between the marker with maximum value for each linkage statistic and the trait locus. We performed a simulation study considering: 1) four modes of transmission, 2) 100 replicates for each model, 3) 58 pedigrees (with 592 subjects) per replicate, 4) three linked marker loci each having three equally frequent alleles, and 5) either 0% unlinked families (linkage homogeneity) or 50% unlinked families (linkage heterogeneity). For each replicate, we obtained the Haldane map position of the location at which each of the five statistics is maximized. The MLOD and MHLOD were obtained by maximizing over penetrances, phenocopy rate, and risk-allele frequencies. For the models simulated, MHLOD appeared to be the best statistic both in terms of identifying a marker locus having the smallest mean distance from the trait locus and in terms of the strongest negative correlation between maximum linkage statistic and distance of the identified position and the trait locus. The marker loci with maximum value of the Kong and Cox extensions of the NPL statistic also were closer to the trait locus than the marker locus with maximum value of the NPL statistic. Copyright 2001 Wiley-Liss, Inc.

  13. AFLP-based genetic mapping of the “bud-flowering” trait in heather (Calluna vulgaris)

    PubMed Central

    2013-01-01

    Background Calluna vulgaris is one of the most important landscaping plants produced in Germany. Its enormous economic success is due to the prolonged flower attractiveness of mutants in flower morphology, the so-called bud-bloomers. In this study, we present the first genetic linkage map of C. vulgaris in which we mapped a locus of the economically highly desired trait “flower type”. Results The map was constructed in JoinMap 4.1 using 535 AFLP markers from a single mapping population. A large fraction (40%) of markers showed distorted segregation. To test the effect of segregation distortion on linkage estimation, these markers were sorted regarding their segregation ratio and added in groups to the data set. The plausibility of group formation was evaluated by comparison of the “two-way pseudo-testcross” and the “integrated” mapping approach. Furthermore, regression mapping was compared to the multipoint-likelihood algorithm. The majority of maps constructed by different combinations of these methods consisted of eight linkage groups corresponding to the chromosome number of C. vulgaris. Conclusions All maps confirmed the independent inheritance of the most important horticultural traits “flower type”, “flower colour”, and “leaf colour”. An AFLP marker for the most important breeding target “flower type” was identified. The presented genetic map of C. vulgaris can now serve as a basis for further molecular marker selection and map-based cloning of the candidate gene encoding the unique flower architecture of C. vulgaris bud-bloomers. PMID:23915059

  14. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
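    The abstract notes that chiasma-frequency estimates are translated into genetic maps via mapping functions. As an illustration only (the paper does not commit to a particular function in this abstract), Haldane's no-interference mapping function is one standard choice:

```python
import math

def haldane_cM(r):
    """Haldane map distance in centimorgans for recombination fraction r.

    Assumes no crossover interference; valid for 0 <= r < 0.5.
    """
    if not 0.0 <= r < 0.5:
        raise ValueError("recombination fraction must be in [0, 0.5)")
    return -50.0 * math.log(1.0 - 2.0 * r)

def haldane_inverse(d_cM):
    """Recombination fraction implied by a Haldane map distance in cM."""
    return 0.5 * (1.0 - math.exp(-d_cM / 50.0))
```

    Small recombination fractions map almost linearly (r = 0.01 gives about 1 cM), while the distance diverges as r approaches 0.5, reflecting that unlinked loci carry no ordering information.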

  15. Mapping of epistatic quantitative trait loci in four-way crosses.

    PubMed

    He, Xiao-Hong; Qin, Hongde; Hu, Zhongli; Zhang, Tianzhen; Zhang, Yuan-Ming

    2011-01-01

    Four-way crosses (4WC) involving four different inbred lines often appear in plant and animal commercial breeding programs. Direct mapping of quantitative trait loci (QTL) in these commercial populations is both economical and practical. However, the existing statistical methods for mapping QTL in a 4WC population are built on the single-QTL genetic model. This simple genetic model fails to take into account QTL interactions, which play an important role in the genetic architecture of complex traits. In this paper, therefore, we attempted to develop a statistical method to detect epistatic QTL in a 4WC population. Conditional probabilities of QTL genotypes, computed by the multi-point single locus method, were used to sample the genotypes of all putative QTL in the entire genome. The sampled genotypes were used to construct the design matrix for QTL effects. All QTL effects, including main and epistatic effects, were simultaneously estimated by the penalized maximum likelihood method. The proposed method was confirmed by a series of Monte Carlo simulation studies and real data analysis of cotton. The new method will provide novel tools for the genetic dissection of complex traits, construction of QTL networks, and analysis of heterosis.

  16. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

    PubMed

    Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

    2012-01-01

    In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
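    The paper's coordinate-descent variant is not spelled out in this abstract; a generic coordinate-ascent sketch for likelihood maximization (illustrative only, with a hypothetical objective f) looks like:

```python
def coordinate_ascent(f, x0, step=0.5, min_step=1e-8, max_iter=10000):
    """Maximize f by adjusting one coordinate at a time.

    Each pass tries a +/- step move on every coordinate and keeps any
    improvement; when a full pass yields no move, the step is halved.
    A generic sketch of the idea, not the authors' exact algorithm.
    """
    x = list(x0)
    for _ in range(max_iter):
        moved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) > f(x):
                    x, moved = trial, True
        if not moved:
            step *= 0.5
            if step < min_step:
                break
    return x
```

    For a smooth concave log-likelihood this converges to a stationary point; in practice each coordinate update would use a line search or closed-form maximizer rather than a fixed step.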

  17. Estimation of the effects of multipoint pacing on battery longevity in routine clinical practice.

    PubMed

    Akerström, Finn; Narváez, Irene; Puchol, Alberto; Pachón, Marta; Martín-Sierra, Cristina; Rodríguez-Mañero, Moisés; Rodríguez-Padial, Luis; Arias, Miguel A

    2017-09-23

    Multipoint pacing (MPP) permits simultaneous multisite pacing of the left ventricle (LV); initial studies suggest haemodynamic and clinical benefits over conventional (single LV site) cardiac resynchronization therapy (CRT). The aim of this study was to estimate the impact of MPP activation on battery longevity in routine clinical practice. Patient (n = 46) and device data were collected from two centres at least 3 months after MPP-CRT device implantation. Multipoint pacing programming was based on the maximal possible anatomical LV1/LV2 separation according to three predefined LV pacing capture threshold (PCT) cut-offs (≤1.5 V; ≤4.0 V; and ≤6.5 V). Estimated battery longevity was calculated using the programmed lower rate limit, lead impedances, outputs, and pacing percentages. Relative to the longevity for conventional CRT using the lowest PCT (8.9 ± 1.2 years), MPP activation significantly shortened battery longevity for all three PCT cut-offs (≤1.5 V, -5.6%; ≤4.0 V, -16.9%; ≤6.5 V, -21.3%; P's <0.001). When compared with conventional CRT based on longest right ventricle-LV delay (8.3 ± 1.3 years), battery longevity was significantly shortened for the MPP ≤ 4.0 V and ≤6.5 V cut-offs (-10.8 and -15.7%, respectively; P's <0.001). Maximal LV1/LV2 spacing was possible in 23.9% (≤1.5 V), 56.5% (≤4.0 V), and 69.6% (≤6.5 V) of patients. Multipoint pacing activation significantly reduces battery longevity compared with that for conventional CRT configuration. When reasonable MPP LV vector PCTs (≤4.0 V) are achieved, the decrease in battery longevity is relatively small which may prompt the clinician to activate MPP. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
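    The reported relative shortenings translate into absolute longevity estimates by simple arithmetic (a back-of-the-envelope sketch using the figures quoted above):

```python
def shortened_longevity(baseline_years, shortening_pct):
    """Battery longevity implied by a baseline estimate and a relative
    shortening expressed as a positive percentage."""
    return baseline_years * (1.0 - shortening_pct / 100.0)

# e.g. the 8.9-year conventional-CRT estimate combined with the -16.9%
# figure for the <=4.0 V cut-off implies roughly 7.4 years with MPP on.
```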

  18. PULMONARY CIRCULATION AT EXERCISE

    PubMed Central

    NAEIJE, R; CHESLER, N

    2012-01-01

    The pulmonary circulation is a high flow and low pressure circuit, with an average resistance of 1 mmHg.min.L−1 in young adults, increasing to 2.5 mmHg.min.L−1 over 4–6 decades of life. Pulmonary vascular mechanics at exercise are best described by distensible models. Exercise does not appear to affect the time constant of the pulmonary circulation or the longitudinal distribution of resistances. Very high flows are associated with high capillary pressures, up to a 20–25 mmHg threshold associated with interstitial lung edema and altered ventilation/perfusion relationships. Pulmonary artery pressures of 40–50 mmHg, which can be achieved at maximal exercise, may correspond to the extreme of tolerable right ventricular afterload. Distension of capillaries that decrease resistance may be of adaptive value during exercise, but this is limited by hypoxemia from altered diffusion/perfusion relationships. Exercise in hypoxia is associated with higher pulmonary vascular pressures and lower maximal cardiac output, with increased likelihood of right ventricular function limitation and altered gas exchange by interstitial lung edema. Pharmacological interventions aimed at the reduction of pulmonary vascular tone have little effect on pulmonary vascular pressure-flow relationships in normoxia, but may decrease resistance in hypoxia, unloading the right ventricle and thereby improving exercise capacity. Exercise in patients with pulmonary hypertension is associated with sharp increases in pulmonary artery pressure and a right ventricular limitation of aerobic capacity. Exercise stress testing to determine multipoint pulmonary vascular pressure-flow relationships may uncover early stage pulmonary vascular disease. PMID:23105961

  19. Genomewide Linkage Scan of 409 European-Ancestry and African American Families with Schizophrenia: Suggestive Evidence of Linkage at 8p23.3-p21.2 and 11p13.1-q14.1 in the Combined Sample

    PubMed Central

    Suarez, Brian K.; Duan, Jubao; Sanders, Alan R.; Hinrichs, Anthony L.; Jin, Carol H.; Hou, Cuiping; Buccola, Nancy G.; Hale, Nancy; Weilbaecher, Ann N.; Nertney, Deborah A.; Olincy, Ann; Green, Susan; Schaffer, Arthur W.; Smith, Christopher J.; Hannah, Dominique E.; Rice, John P.; Cox, Nancy J.; Martinez, Maria; Mowry, Bryan J.; Amin, Farooq; Silverman, Jeremy M.; Black, Donald W.; Byerley, William F.; Crowe, Raymond R.; Freedman, Robert; Cloninger, C. Robert; Levinson, Douglas F.; Gejman, Pablo V.

    2006-01-01

    We report the clinical characteristics of a schizophrenia sample of 409 pedigrees—263 of European ancestry (EA) and 146 of African American ancestry (AA)—together with the results of a genome scan (with a simple tandem repeat polymorphism interval of 9 cM) and follow-up fine mapping. A family was required to have a proband with schizophrenia (SZ) and one or more siblings of the proband with SZ or schizoaffective disorder. Linkage analyses included 403 independent full-sibling affected sibling pairs (ASPs) (279 EA and 124 AA) and 100 all-possible half-sibling ASPs (15 EA and 85 AA). Nonparametric multipoint linkage analysis of all families detected two regions with suggestive evidence of linkage at 8p23.3-q12 and 11p11.2-q22.3 (empirical Z likelihood-ratio score [Zlr] threshold ⩾2.65) and, in exploratory analyses, two other regions at 4p16.1-p15.32 in AA families and at 5p14.3-q11.2 in EA families. The most significant linkage peak was in chromosome 8p; its signal was mainly driven by the EA families. Zlr scores >2.0 in 8p were observed from 30.7 cM to 61.7 cM (Center for Inherited Disease Research map locations). The maximum evidence in the full sample was a multipoint Zlr of 3.25 (equivalent Kong-Cox LOD of 2.30) near D8S1771 (at 52 cM); there appeared to be two peaks, both telomeric to neuregulin 1 (NRG1). There is a paracentric inversion common in EA individuals within this region, the effect of which on the linkage evidence remains unknown in this and in other previously analyzed samples. Fine mapping of 8p did not significantly alter the significance or length of the peak. We also performed fine mapping of 4p16.3-p15.2, 5p15.2-q13.3, 10p15.3-p14, 10q25.3-q26.3, and 11p13-q23.3. The highest increase in Zlr scores was observed for 5p14.1-q12.1, where the maximum Zlr increased from 2.77 initially to 3.80 after fine mapping in the EA families. PMID:16400611

  20. Genomewide linkage scan of 409 European-ancestry and African American families with schizophrenia: suggestive evidence of linkage at 8p23.3-p21.2 and 11p13.1-q14.1 in the combined sample.

    PubMed

    Suarez, Brian K; Duan, Jubao; Sanders, Alan R; Hinrichs, Anthony L; Jin, Carol H; Hou, Cuiping; Buccola, Nancy G; Hale, Nancy; Weilbaecher, Ann N; Nertney, Deborah A; Olincy, Ann; Green, Susan; Schaffer, Arthur W; Smith, Christopher J; Hannah, Dominique E; Rice, John P; Cox, Nancy J; Martinez, Maria; Mowry, Bryan J; Amin, Farooq; Silverman, Jeremy M; Black, Donald W; Byerley, William F; Crowe, Raymond R; Freedman, Robert; Cloninger, C Robert; Levinson, Douglas F; Gejman, Pablo V

    2006-02-01

    We report the clinical characteristics of a schizophrenia sample of 409 pedigrees--263 of European ancestry (EA) and 146 of African American ancestry (AA)--together with the results of a genome scan (with a simple tandem repeat polymorphism interval of 9 cM) and follow-up fine mapping. A family was required to have a proband with schizophrenia (SZ) and one or more siblings of the proband with SZ or schizoaffective disorder. Linkage analyses included 403 independent full-sibling affected sibling pairs (ASPs) (279 EA and 124 AA) and 100 all-possible half-sibling ASPs (15 EA and 85 AA). Nonparametric multipoint linkage analysis of all families detected two regions with suggestive evidence of linkage at 8p23.3-q12 and 11p11.2-q22.3 (empirical Z likelihood-ratio score [Z(lr)] threshold >/=2.65) and, in exploratory analyses, two other regions at 4p16.1-p15.32 in AA families and at 5p14.3-q11.2 in EA families. The most significant linkage peak was in chromosome 8p; its signal was mainly driven by the EA families. Z(lr) scores >2.0 in 8p were observed from 30.7 cM to 61.7 cM (Center for Inherited Disease Research map locations). The maximum evidence in the full sample was a multipoint Z(lr) of 3.25 (equivalent Kong-Cox LOD of 2.30) near D8S1771 (at 52 cM); there appeared to be two peaks, both telomeric to neuregulin 1 (NRG1). There is a paracentric inversion common in EA individuals within this region, the effect of which on the linkage evidence remains unknown in this and in other previously analyzed samples. Fine mapping of 8p did not significantly alter the significance or length of the peak. We also performed fine mapping of 4p16.3-p15.2, 5p15.2-q13.3, 10p15.3-p14, 10q25.3-q26.3, and 11p13-q23.3. The highest increase in Z(lr) scores was observed for 5p14.1-q12.1, where the maximum Z(lr) increased from 2.77 initially to 3.80 after fine mapping in the EA families.
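    The "equivalent Kong-Cox LOD" quoted above follows from the standard relation LOD = Zlr^2 / (2 ln 10); a quick check (illustrative sketch):

```python
import math

def zlr_to_lod(zlr):
    """Equivalent LOD score for a Kong & Cox Z likelihood-ratio score."""
    return zlr ** 2 / (2.0 * math.log(10.0))

# zlr_to_lod(3.25) is about 2.29, consistent with the "equivalent
# Kong-Cox LOD of 2.30" reported for the rounded Zlr of 3.25.
```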

  1. Integrated photonics for fiber optic based temperature sensing

    NASA Astrophysics Data System (ADS)

    Evenblij, R. S.; van Leest, T.; Haverdings, M. B.

    2017-09-01

    One of the promising space application areas for fibre sensing is highly reliable thermal mapping of metrology structures for effects such as thermal deformation, focal plane distortion, etc. In addition, multi-point temperature sensing capability for payload panels and instrumentation, instead of or in addition to conventional thermocouple technology, will drastically reduce electrical wiring and sensor materials to minimize weight and cost. Current fiber sensing technologies based on solid state ASPIC (Application Specific Photonic Integrated Circuits) technology allow significant miniaturization of instrumentation and improved reliability. These imperative aspects make the technology a candidate for applications in harsh environments such as space. One of the major steps in maturing ASPIC technology for space is the assessment of radiation hardness. This paper describes the results of radiation hardness experiments on ASPICs, including typical multipoint temperature sensing and thermal mapping capabilities.

  2. Immobilization and stabilization of pectinase by multipoint attachment onto an activated agar-gel support.

    PubMed

    Li, Tuoping; Li, Suhong; Wang, Na; Tain, Lirui

    2008-08-15

    Pectinase was immobilized on an activated agar-gel support by multipoint attachment. The maximal activity of immobilized pectinase was obtained at 5°C, pH 3.6, with a 24 h reaction time at an enzyme dose of 0.52 mg protein/g gel, with the gel activated with 1.0 M glycidol. These conditions increased the thermal stability of the immobilized pectinase 19-fold compared with the free enzyme at 65°C. The optimal temperature for pectinase activity changed from 40 to 50°C after immobilization; however, the optimal pH remained unchanged. The immobilized enzyme also exhibited good operational stability, retaining 81% residual activity after 10 batch reactions. Copyright © 2008 Elsevier Ltd. All rights reserved.

  3. Wildlife tradeoffs based on landscape models of habitat

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.

    2000-01-01

    It is becoming increasingly clear that the spatial structure of landscapes affects the habitat choices and abundance of wildlife. In contrast to wildlife management based on the preservation of critical habitat features, such as nest sites on a beach or mast trees, it has not been obvious how to incorporate spatial structure into management plans. We present techniques to accomplish this goal. We used multiscale logistic regression models developed previously for neotropical migrant bird species habitat use in South Carolina (USA) as a basis for these techniques. Based on these models, we used a spatial optimization technique to generate optimal maps (probability of occurrence, P = 1.0) for each of seven species. To emulate management of a forest for maximum species diversity, we defined the objective function of the algorithm as the sum of probabilities over the seven species, resulting in a complex map that allowed all seven species to coexist. The map that allows for coexistence is not obvious, must be computed algorithmically, and would be difficult to realize using rules of thumb for habitat management. To assess how management of a forest for a single species of interest might affect other species, we analyzed tradeoffs by gradually increasing the weighting on a single species in the objective function over a series of simulations. We found that as habitat was increasingly modified to favor that species, the probability of presence for two of the other species was driven to zero. This shows that although it is not possible to simultaneously maximize the likelihood of presence for multiple species with divergent habitat preferences, compromise solutions are possible at less than maximal likelihood in many cases. Our approach suggests that the efficiency of habitat management for species diversity can be maximized even for small landscapes by incorporating spatial context.
The methods we present are suitable for wildlife management, endangered species conservation, and nature reserve design.
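
    The weighted multi-species objective described above can be sketched in a few lines. The following toy illustration is not the authors' model: the logistic coefficients, habitat types, and grid are all invented, and species presence is scored against landscape-wide habitat composition rather than a fitted multiscale model. A greedy single-cell search then maximizes the weighted sum of presence probabilities.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

# Toy landscape: each cell of a small grid gets one of three invented
# habitat types; each "species" has a logistic presence model over the
# landscape-wide composition of types (coefficients are made up).
types, side = 3, 6
coefs = np.asarray(rng.normal(0.0, 1.0, size=(3, types)))  # 3 species x type effects
weights = np.array([1.0, 1.0, 1.0])                        # management weights

def occupancy(grid):
    comp = np.bincount(grid.ravel(), minlength=types) / grid.size
    return 1.0/(1.0 + np.exp(-(coefs @ comp)))  # per-species P(presence)

def objective(grid):
    return float(weights @ occupancy(grid))     # weighted species sum

# Greedy hill-climbing over single-cell habitat changes
grid = rng.integers(0, types, size=(side, side))
init_score = best = objective(grid)
improved = True
while improved:
    improved = False
    for i, j, t in itertools.product(range(side), range(side), range(types)):
        old = grid[i, j]
        if t == old:
            continue
        grid[i, j] = t
        cand = objective(grid)
        if cand > best + 1e-12:
            best, improved = cand, True   # keep the improving change
        else:
            grid[i, j] = old              # revert
```

    Reweighting `weights` toward one species and re-running the search mimics the tradeoff simulations described in the abstract.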

  4. Initial results from a multi-point mapping observation of thundercloud high-energy radiation in the coastal area of the Japan Sea

    NASA Astrophysics Data System (ADS)

    Wada, Y.; Enoto, T.; Furuta, Y.; Nakazawa, K.; Yuasa, T.; Okuda, K.; Makishima, K.; Nakano, T.; Umemoto, D.; Tsuchiya, H.

    2017-12-01

    On-ground detections of Thunderstorm Radiation Bursts (TRBs), which mainly consist of bremsstrahlung gamma rays with energies extending up to 20 MeV, indicate powerful electron acceleration inside thunderclouds or along lightning discharge paths (e.g. Torii et al., 2002, Tsuchiya et al., 2007, Dwyer et al., 2004). In order to resolve the time variation and structure of the electron accelerators, we have been constructing a multi-point mapping observation network since 2015 with the aim of tracing gamma rays from moving thunderclouds. In fiscal 2016, we developed low-cost, small detectors dedicated to our observation. The data acquisition system records the energy and timing of individual gamma-ray photons with 4-channel, 50 MHz-sampling electronics boards (9.5 cm x 9.5 cm) coupled to BGO scintillator crystals. The systems were installed in portable waterproof boxes. We operated 10 detectors in two areas (Ishikawa and Niigata) along the coast of the Japan Sea from October 2016 to April 2017. During this period, the detectors in Ishikawa detected a total of 10 TRBs, each lasting several minutes and associated with the passage of a thundercloud. Our previous single-site measurement at Niigata recorded 1.4 TRBs per year on average during 2006-2015; the new multi-point observation therefore detected 7 times as many events as the previous system. One of the TRB gamma-ray spectra was fitted well by a cutoff power-law model. A Monte Carlo simulation revealed that this spectrum is explained as bremsstrahlung from a monochromatic 15 MeV electron beam generated at an altitude of 500 m. We also succeeded in tracing gamma rays from a single moving thundercloud with two detectors, demonstrating the performance of the multi-point observation. In addition, in January and February 2017 at Niigata, we simultaneously detected "short TRBs" lasting a few hundred milliseconds, associated with lightning discharges, with four independent detectors placed 500 m apart. The results of the 2016-2017 winter season demonstrate that our multi-point observation can reliably detect a large number of TRBs and trace gamma rays from thunderstorms.

  5. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  6. High-Resolution Measurement of the Turbulent Frequency-Wavenumber Power Spectrum in a Laboratory Magnetosphere

    NASA Astrophysics Data System (ADS)

    Qian, T. M.; Mauel, M. E.

    2017-10-01

    In a laboratory magnetosphere, plasma is confined by a strong dipole magnet, and interchange and entropy mode turbulence can be studied and controlled in near steady-state conditions. Whole-plasma imaging shows turbulence dominated by long-wavelength modes having chaotic amplitudes and phases. Here we report, for the first time, a high-resolution measurement of the frequency-wavenumber power spectrum, obtained by applying the method of Capon to simultaneous multi-point measurements of electrostatic entropy modes using an array of floating potential probes. Unlike previously reported measurements, in which ensemble correlation between two probes detected only the dominant wavenumber, Capon's ``maximum likelihood method'' uses all available probes to produce a frequency-wavenumber spectrum, showing the existence of modes propagating in both the electron and ion magnetic drift directions. We also discuss the wider application of this technique to laboratory and magnetospheric plasmas with simultaneous multi-point measurements. Supported by NSF-DOE Partnership in Plasma Science Grant DE-FG02-00ER54585.
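
    Capon's estimator has a compact form: given the cross-spectral matrix R(f) estimated across the probe array, the wavenumber spectrum at that frequency is P(k) = 1 / (e(k)^H R^{-1} e(k)), where e(k) is the steering vector over the probe positions. A minimal sketch on synthetic probe data follows; the array geometry, wave parameters, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probe array: M probes with spacing d (arbitrary units)
M, d = 8, 0.5
x = np.arange(M) * d

# Synthetic traveling wave at frequency f0 and wavenumber k0, plus noise
f0, k0 = 10.0, 2.0            # Hz, rad/unit
fs, T, nseg = 200.0, 4.0, 50  # sample rate, segment length, segment count
t = np.arange(0.0, T, 1.0/fs)
fbin = int(round(f0 * T))     # rfft bin corresponding to f0

# Cross-spectral matrix R at f0, averaged over segments
R = np.zeros((M, M), dtype=complex)
for _ in range(nseg):
    phi = rng.uniform(0.0, 2.0*np.pi)  # random phase per segment
    sig = np.cos(k0*x[:, None] - 2.0*np.pi*f0*t[None, :] + phi)
    sig += 0.5 * rng.standard_normal(sig.shape)
    X = np.fft.rfft(sig, axis=1)[:, fbin]  # complex amplitude at f0
    R += np.outer(X, X.conj())
R /= nseg
load = 1e-3 * np.trace(R).real / M        # diagonal loading for stability
Rinv = np.linalg.inv(R + load*np.eye(M))

# Capon ("maximum likelihood method") wavenumber spectrum at f0:
#   P(k) = 1 / (e(k)^H R^{-1} e(k)),  steering vector e_m(k) = exp(-i k x_m)
kgrid = np.linspace(-np.pi/d, np.pi/d, 401)
P = np.empty_like(kgrid)
for i, k in enumerate(kgrid):
    e = np.exp(-1j * k * x)
    P[i] = 1.0 / np.real(e.conj() @ Rinv @ e)

k_est = kgrid[np.argmax(P)]   # should recover k0
```

    The peak of P sits at the propagating wavenumber; scanning `fbin` over frequency bins yields the full frequency-wavenumber spectrum.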

  7. Genome-wide linkage scan for submaximal exercise heart rate in the HERITAGE family study.

    PubMed

    Spielmann, Nadine; Leon, Arthur S; Rao, D C; Rice, Treva; Skinner, James S; Rankinen, Tuomo; Bouchard, Claude

    2007-12-01

    The purpose of this study was to identify regions of the human genome linked to submaximal exercise heart rates in the sedentary state and in response to a standardized 20-wk endurance training program in blacks and whites of the HERITAGE Family Study. A total of 701 polymorphic markers covering the 22 autosomes were used in the genome-wide linkage scan, with 328 sibling pairs from 99 white nuclear families and 102 pairs from 115 black family units. Steady-state heart rates were measured at the relative intensity of 60% maximal oxygen uptake (HR60) and at the absolute intensity of 50 W (HR50). Baseline phenotypes were adjusted for age, sex, and baseline body mass index (BMI), and training responses (posttraining minus baseline, Δ) were adjusted for age, sex, baseline BMI, and the baseline value of the phenotype. Two analytic strategies were used: a multipoint variance components analysis and a regression-based multipoint linkage analysis. In whites, promising linkages (LOD > 1.75) were identified on 18q21-q22 for baseline HR50 (LOD = 2.64; P = 0.0002) and ΔHR60 (LOD = 2.10; P = 0.0009) and on chromosome 2q33.3 for ΔHR50 (LOD = 2.13; P = 0.0009). In blacks, evidence of promising linkage for baseline HR50 was detected with several markers within the chromosomal region 10q24-q25.3 (peak LOD = 2.43, P = 0.0004 with D10S597). The most promising regions for fine mapping in the HERITAGE Family Study were found on 2q33 for the HR50 training response in whites, on 10q25-26 for baseline HR60 in blacks, and on 18q21-22 for both baseline HR50 and ΔHR60 in whites.

  8. Methodology and method and apparatus for signaling with capacity optimized constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2011-01-01

    A communication system includes a transmitter with a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission over the communication channel using symbols generated by the mapper. The receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. The symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.

  9. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single- and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving the 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin.
An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 points improvement in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single- and multi-point optimizations.

  10. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  11. Self-Powered High-Resolution and Pressure-Sensitive Triboelectric Sensor Matrix for Real-Time Tactile Mapping.

    PubMed

    Wang, Xiandi; Zhang, Hanlu; Dong, Lin; Han, Xun; Du, Weiming; Zhai, Junyi; Pan, Caofeng; Wang, Zhong Lin

    2016-04-20

    A triboelectric sensor matrix (TESM) can accurately track and map 2D tactile sensing. A self-powered, high-resolution, pressure-sensitive, flexible and durable TESM with 16 × 16 pixels is fabricated for the fast detection of single-point and multi-point touching. Using cross-locating technology, a cross-type TESM with 32 × 20 pixels is developed for more rapid tactile mapping, which significantly reduces the addressing lines from m × n to m + n. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Capturing rogue waves by multi-point statistics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.

    2016-01-01

    As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which, for the first time, allows extreme rogue wave events to be captured in a statistically satisfactory manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description, surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and of extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
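
    The drift and diffusion functions of a Fokker-Planck equation can indeed be estimated directly from data, via conditional moments of the increments (the first two Kramers-Moyal coefficients). A minimal sketch on a synthetic Ornstein-Uhlenbeck series standing in for the scale-resolved increments; the process parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Ornstein-Uhlenbeck series (invented gamma, D, dt):
#   dx = -gamma*x*dt + sqrt(2*D)*dW
gamma, D, dt, n = 1.0, 0.5, 1e-3, 500_000
x = np.empty(n)
x[0] = 0.0
kicks = np.sqrt(2.0*D*dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i+1] = x[i] - gamma*x[i]*dt + kicks[i]

# Kramers-Moyal coefficients from conditional moments of the increments:
#   D1(x) ~ <x(t+dt)-x(t) | x(t)=x> / dt          (drift)
#   D2(x) ~ <(x(t+dt)-x(t))^2 | x(t)=x> / (2 dt)  (diffusion)
bins = np.linspace(-1.5, 1.5, 21)
idx = np.digitize(x[:-1], bins)
dx = np.diff(x)
centers, D1, D2 = [], [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 500:            # require enough samples per bin
        centers.append(0.5*(bins[b-1] + bins[b]))
        D1.append(dx[sel].mean() / dt)
        D2.append((dx[sel]**2).mean() / (2.0*dt))
centers, D1, D2 = map(np.asarray, (centers, D1, D2))

# For an OU process the truth is D1(x) = -gamma*x and D2(x) = D
slope = np.polyfit(centers, D1, 1)[0]
```

    The same binned-conditional-moment estimate, applied per time scale, is what turns observational increment data into an empirical Fokker-Planck description.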

  13. Linkage disequilibrium interval mapping of quantitative trait loci.

    PubMed

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-03-16

    For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.

  14. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
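
    The Copas-like likelihood itself is too involved for a few lines, but the E/M alternation the abstract relies on can be illustrated on the textbook latent-variable problem: a two-component Gaussian mixture. This is only a generic EM sketch on invented data, not the publication-bias model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: two-component Gaussian mixture, shared unit variance
z = rng.random(2000) < 0.4
data = np.where(z, rng.normal(-2.0, 1.0, 2000), rng.normal(1.5, 1.0, 2000))

def em_mixture(x, iters=200):
    """EM for a two-component Gaussian mixture with a common variance."""
    pi, mu1, mu2, s = 0.5, x.min(), x.max(), x.std()  # crude initialization
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        p1 = pi * np.exp(-0.5*((x - mu1)/s)**2)
        p2 = (1.0 - pi) * np.exp(-0.5*((x - mu2)/s)**2)
        r = p1 / (p1 + p2)
        # M-step: maximize the expected complete-data log-likelihood
        pi = r.mean()
        mu1 = (r*x).sum() / r.sum()
        mu2 = ((1.0 - r)*x).sum() / (1.0 - r).sum()
        s = np.sqrt((r*(x - mu1)**2 + (1.0 - r)*(x - mu2)**2).mean())
    return pi, mu1, mu2, s

pi, mu1, mu2, s = em_mixture(data)
```

    The structural point carries over to the Copas-like model: the E-step fills in the expectation over the latent selection variable, and the M-step maximizes the resulting full (complete-data) likelihood.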

  15. Analysis of the quality of image data acquired by the LANDSAT-4 Thematic Mapper and Multispectral Scanner. [agricultural and forest cover types in California]

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1984-01-01

    The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases, even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.
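
    The Bayesian maximum likelihood classifier used in such multispectral comparisons assigns each pixel to the class whose multivariate Gaussian gives it the highest log-likelihood. A self-contained sketch on invented 4-band statistics for two hypothetical cover classes (the band means, covariances, and class names are all made up):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 4-band "pixel" samples for two hypothetical cover classes
n, bands = 500, 4
classes = {
    "crop":   rng.multivariate_normal([40, 60, 50, 30], 9.0*np.eye(bands), n),
    "forest": rng.multivariate_normal([30, 45, 70, 55], 9.0*np.eye(bands), n),
}

# "Training": per-class mean vector and covariance matrix
stats = {name: (X.mean(axis=0), np.cov(X, rowvar=False))
         for name, X in classes.items()}

def classify(pixel):
    """Gaussian maximum-likelihood rule: pick the class maximizing
    log p(pixel | class) = -0.5*(log det S + d^T S^-1 d) + const."""
    best, best_ll = None, -np.inf
    for name, (mu, S) in stats.items():
        diff = pixel - mu
        ll = -0.5*(np.log(np.linalg.det(S)) + diff @ np.linalg.solve(S, diff))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

acc = np.mean([classify(p) == "crop" for p in classes["crop"]])
```

    Adding equal class priors (as here, implicitly) makes this the Bayes rule under a uniform prior; per-class priors would simply add log-prior terms to `ll`.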

  16. Evidence for bivariate linkage of obesity and HDL-C levels in the Framingham Heart Study.

    PubMed

    Arya, Rector; Lehman, Donna; Hunt, Kelly J; Schneider, Jennifer; Almasy, Laura; Blangero, John; Stern, Michael P; Duggirala, Ravindranath

    2003-12-31

    Epidemiological studies have indicated that obesity and low high-density lipoprotein (HDL) levels are strong cardiovascular risk factors, and that these traits are inversely correlated. Despite the belief that these traits are correlated in part due to pleiotropy, knowledge of specific genes commonly affecting obesity and dyslipidemia is very limited. To address this issue, we first conducted univariate multipoint linkage analysis for body mass index (BMI) and HDL-C to identify loci influencing variation in these phenotypes, using Framingham Heart Study data relating to 1702 subjects distributed across 330 pedigrees. Subsequently, we performed bivariate multipoint linkage analysis to detect common loci influencing covariation between these two traits. We scanned the genome and identified a major locus near marker D6S1009 influencing variation in BMI (LOD = 3.9) using the program SOLAR. We also identified a major locus for HDL-C near marker D2S1334 on chromosome 2 (LOD = 3.5) and another region near marker D6S1009 on chromosome 6 with suggestive evidence for linkage (LOD = 2.7). Since these two phenotypes were independently mapped to the same region on chromosome 6q, we applied the bivariate multipoint linkage approach in SOLAR. The bivariate linkage analysis of BMI and HDL-C implicated the genetic region near marker D6S1009 as harboring a major gene commonly influencing these phenotypes (bivariate LOD = 6.2; LODeq = 5.5) and appears to improve the power to map the correlated traits to a region precisely. We found substantial evidence for a quantitative trait locus with pleiotropic effects that appears to influence both the BMI and HDL-C phenotypes in the Framingham data.

  17. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
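
    Fisher's method of scoring replaces the observed Hessian in Newton's method with the expected (Fisher) information. The tomographic setting above is involved, but the iteration itself is easy to show; here it is for logistic regression, where scoring coincides with iteratively reweighted least squares. This is a generic sketch on simulated data, unrelated to the paper's reconstruction problem.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated logistic-regression data (invented coefficients)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -1.2])
p = 1.0/(1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)

# Fisher scoring: beta <- beta + I(beta)^-1 U(beta), with score U and
# expected information I (equal to the observed information here)
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0/(1.0 + np.exp(-X @ beta))
    score = X.T @ (y - mu)                       # gradient of log-likelihood
    info = X.T @ (X * (mu*(1.0 - mu))[:, None])  # Fisher information matrix
    step = np.linalg.solve(info, score)
    beta = beta + step                           # scoring update
    if np.abs(step).max() < 1e-10:
        break
```

    Jacobi or Gauss-Seidel variants, as in the paper, amount to solving the `info @ step = score` system approximately, updating parameters simultaneously or sequentially rather than with a full matrix solve.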

  18. A Novel Form of “Central Pouchlike” Cataract, with Sutural Opacities, Maps to Chromosome 15q21-22

    PubMed Central

    Vanita; Singh, Jai Rup; Sarhadi, Virinder K.; Singh, Daljit; Reis, André; Rueschendorf, Franz; Becker-Follmann, Johannes; Jung, Martin; Sperling, Karl

    2001-01-01

    Congenital cataract is a clinically and genetically highly heterogeneous eye disorder, with autosomal dominant inheritance being most common. We investigated a large seven-generation family with 74 individuals affected by autosomal dominant congenital cataract (ADCC). The phenotype in this family can be described as “central pouchlike” cataract with sutural opacities, and it differs from the other mapped cataracts. We performed linkage analysis with microsatellite markers in this family and excluded the known candidate genes. A genomewide search revealed linkage to markers on chromosome 15, with a maximum two-point LOD score of 5.98 at θ=0 with marker D15S117. Multipoint analysis also gave a maximum LOD score of 5.98 at D15S117. Multipoint and haplotype analysis narrowed the cataract locus to a 10-cM region between markers D15S209 and D15S1036, closely linked to marker D15S117 in q21-q22 region of chromosome 15. This is the first report of a gene for a clinically new type of ADCC at 15q21-22 locus. PMID:11133359

  19. A maximum likelihood map of chromosome 1.

    PubMed Central

    Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S

    1979-01-01

    Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128

  20. Frameshift Suppression in SACCHAROMYCES CEREVISIAE VI. Complete Genetic Map of Twenty-Five Suppressor Genes

    PubMed Central

    Gaber, Richard F.; Mathison, Lorilee; Edelman, Irv; Culbertson, Michael R.

    1983-01-01

    Five previously unmapped frameshift suppressor genes have been located on the yeast genetic map. In addition, we have further characterized the map positions of two suppressors whose approximate locations were determined in an earlier study. These results represent the completion of genetic mapping studies on all 25 of the known frameshift suppressor genes in yeast. The approximate location of each suppressor gene was initially determined through the use of a set of mapping strains containing 61 signal markers distributed throughout the yeast genome. Standard meiotic linkage was assayed in crosses between strains carrying the suppressors and the mapping strains. Subsequent to these approximate linkage determinations, each suppressor gene was more precisely located in multi-point crosses. The implications of these mapping results for the genomic distribution of frameshift suppressor genes, which include both glycine and proline tRNA genes, are discussed. PMID:17246112

  1. Optical Measurements in a Combustor Using a 9-Point Swirl-Venturi Fuel Injector

    NASA Technical Reports Server (NTRS)

    Hicks, Yolanda R.; Anderson, Robert C.; Locke, Randy J.

    2007-01-01

    This paper highlights the use of two-dimensional data to characterize a multipoint swirl-venturi injector. The injector is based on a NASA-conceived lean direct injection concept. Using a variety of advanced optical diagnostic techniques, we examine the flows resultant from multipoint, lean-direct injectors that have nine injection sites arranged in a 3 x 3 grid. The measurements are made within an optically-accessible, jet-A-fueled, 76-mm by 76-mm flame tube combustor. Combustion species mapping and velocity measurements are obtained using planar laser-induced fluorescence of OH and fuel, planar laser scatter of liquid fuel, chemiluminescence from CH*, NO*, and OH*, and particle image velocimetry of seeded air (non-fueled). These measurements are used to study fuel injection, mixedness, and combustion processes and are part of a database of measurements that will be used for validating computational combustion models.

  2. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  3. The gene for creatine kinase, mitochondrial 2 (sarcomeric; CKMT2), maps to chromosome 5q13.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard, I.; Devaud, C.; Cherif, D.

    1993-10-01

    YAC clones for the creatine kinase, mitochondrial 2 (sarcomeric; CKMT2), gene were isolated. One of these YACs was localized to chromosome 5q13.3 by fluorescence in situ hybridization. A polymorphic dinucleotide repeat (heterozygosity 0.77) was identified within the seventh intron of the CKMT2 gene. Genotyping of CEPH families allowed positioning of CKMT2 on the multipoint map of chromosome 5 between D5S424 and D5S428, distal to spinal muscular atrophy (SMA) (5q12-q14). 8 refs., 1 fig., 2 tabs.

  4. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm seeking the maximum likelihood solution, leads to increased noise as iterations proceed. The maximum a posteriori (MAP) estimate removes this divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulations and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
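    The iterative expectation-maximization reconstruction mentioned above is, in its basic maximum-likelihood form, the classic MLEM multiplicative update. A toy sketch under stated assumptions (a small random system matrix stands in for real scanner geometry, and no DL or TV prior is included):

```python
import numpy as np

# Toy MLEM: y ~ Poisson(A @ x_true); A is a hypothetical 8 x 4 system matrix.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(8, 4))      # detector-by-voxel weights
x_true = np.array([1.0, 4.0, 2.0, 3.0])
y = rng.poisson(A @ x_true).astype(float)   # noisy "sinogram" counts

x = np.ones(4)                              # nonnegative initial image
sens = A.sum(axis=0)                        # sensitivity image, A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)    # measured / forward-projected
    x = x / sens * (A.T @ ratio)            # multiplicative EM update

print(x)  # stays nonnegative; the Poisson likelihood never decreases
```

    The multiplicative form keeps the image nonnegative automatically, which is one reason MLEM is the standard ML baseline that MAP variants build on.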

  5. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  6. Examination of X chromosome markers in Rett syndrome: Exclusion mapping with a novel variation on multilocus linkage analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellison, K.A.; Fill, C.P.; Terwilliger, J.

    Rett syndrome is a neurologic disorder characterized by early normal development followed by regression, acquired deceleration of head growth, autism, ataxia, and stereotypic hand movements. The exclusive occurrence of the syndrome in females and the occurrence of a few familial cases with inheritance through maternal lines suggest that this disorder is most likely secondary to a mutation on the X chromosome. To address this hypothesis and to identify candidate regions for the Rett syndrome gene locus, genotypic analysis was performed in two families with maternally related affected half-sisters by using 63 DNA markers from the X chromosome. Nineteen of the loci studied were chosen for multipoint linkage analysis because they have been previously genetically mapped using a large number of meioses from reference families. Using the exclusion criterion of a lod score less than -2, the authors were able to exclude the region between the Duchenne muscular dystrophy locus and the DXS456 locus. This region extends from Xp21.2 to Xq21-q23. The use of the multipoint linkage analysis approach outlined in this study should allow the exclusion of additional regions of the X chromosome as new markers are analyzed.

  7. The autosomal dominant familial exudative vitreoretinopathy locus maps on 11q and is closely linked to D11S533

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuen; Schwinger, D.; Gal, A.

    1992-10-01

    Autosomal dominant familial exudative vitreoretinopathy (adFEVR) is a hereditary disorder characterized by the incomplete vascularization of the peripheral retina. The primary biochemical defect in adFEVR is unknown. The adFEVR locus has tentatively been assigned to 11q by linkage studies. The authors report the results of an extended multipoint linkage analysis of two families with adFEVR by using five markers (INT2, D11S533, D11S527, D11S35, and CD3D) from 11q13-q23. Pairwise linkage data obtained in the two families were rather similar and hence have not provided evidence for genetic heterogeneity. The highest compiled two-point lod score (3.67, at a recombination fraction of .07) was obtained for the disease locus versus D11S533. Multipoint analyses showed that the adFEVR locus maps most likely, with a maximum location score of over 20, between D11S533/D11S526 and D11S35, at recombination rates of .147 and .104, respectively. Close linkage without recombination (maximum lod score 11.26) has been found between D11S533 and D11S526. 15 refs., 3 figs., 4 tabs.

  8. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
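    The core of the likelihood-map idea can be sketched as follows: each sensor contributes a log-likelihood surface over a common spatial grid, and fusion is a pointwise sum. The two range-only sensors, their positions, and the noise levels below are illustrative assumptions, not the paper's sensor models.

```python
import numpy as np

# Common spatial grid over a 10 x 10 surveillance area.
grid = np.stack(np.meshgrid(np.linspace(0, 10, 101),
                            np.linspace(0, 10, 101)), axis=-1)
target = np.array([6.0, 3.0])  # unknown to the sensors, used to simulate data

def range_loglik(sensor_pos, measured_range, sigma):
    """Log-likelihood of each grid cell given one range measurement."""
    d = np.linalg.norm(grid - sensor_pos, axis=-1)
    return -0.5 * ((d - measured_range) / sigma) ** 2

# Two sensors at known positions report (noise-free, for clarity) ranges.
s1, s2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
fused = (range_loglik(s1, np.linalg.norm(target - s1), 0.5)
         + range_loglik(s2, np.linalg.norm(target - s2), 0.5))

iy, ix = np.unravel_index(np.argmax(fused), fused.shape)
print(grid[iy, ix])  # peak of the fused map lies at the true target
```

    Because the fusion step is just an addition of per-sensor surfaces, it distributes naturally, which is what the gossip-based computation in the paper exploits.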

  9. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.

  10. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
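    The least power estimation (LPE) mentioned above minimizes the sum of p-th powers of absolute residuals. A minimal 1-D sketch using a grid search (chosen for transparency, not efficiency) shows how p = 2 and p = 1 recover the mean and the median, respectively:

```python
import numpy as np

def lpe_location(x, p, grid=None):
    """Least power estimate: the location mu minimizing sum |x_i - mu|^p.
    Assumption: 1-D data; a coarse grid search stands in for a real solver."""
    if grid is None:
        grid = np.linspace(x.min(), x.max(), 2001)
    cost = np.abs(x[:, None] - grid[None, :]) ** p
    return grid[np.argmin(cost.sum(axis=0))]

rng = np.random.default_rng(2)
x = rng.normal(5.0, 1.0, 500)
print(lpe_location(x, 2.0))  # p = 2: close to the arithmetic mean
print(lpe_location(x, 1.0))  # p = 1: close to the median, robust to outliers
```

    Values of p between 1 and 2 interpolate between these two behaviors, which is the flexibility the p-normal-based upscaling methods draw on.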

  11. Freestream Effects on Boundary Layer Disturbances for HIFiRE-5 (Postprint)

    DTIC Science & Technology

    2015-01-01

    hypersonic wind tunnel. For Mach 6.5 and 7, there was no evidence of traveling crossflow waves. However, higher-frequency disturbances were observed. These ... disturbances. The structure of these disturbances (phase velocity and wave angle) is similar in both wind tunnels. Coherence measurements in the ACE show that ... These include detailed mapping of disturbance fields in the wind tunnel, including higher-frequency measurements, multi-point probe measurements to

  12. Refined genetic mapping of X-linked Charcot-Marie-Tooth neuropathy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fain, P.R.; Barker, D.F.; Chance, P.F.

    1994-02-01

    Genetic linkage studies were conducted in four multigenerational families with X-linked Charcot-Marie-Tooth disease (CMTX), using 12 highly polymorphic short-tandem-repeat markers for the pericentromeric region of the X Chromosome. Pairwise linkage analysis with individual markers confirmed tight linkage of CMTX to the pericentromeric region in each family. Multipoint analyses strongly support the order DXS337-CMTX-DXS441-(DXS56, PGK1). 38 refs., 2 figs., 1 tab.

  13. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
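    For context, here is a plain EM iteration for a two-component 1-D Gaussian mixture, the kind of MLE baseline that DQAEM extends. This sketch is ordinary EM with known, equal variances, not the quantum-annealing variant:

```python
import numpy as np

# Synthetic data: two well-separated Gaussian clusters.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

mu = np.array([-1.0, 1.0])   # initial component means
pi = np.array([0.5, 0.5])    # mixing weights
for _ in range(100):
    # E-step: responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: reestimate weights and means from the responsibilities.
    pi = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(mu))  # close to the true component means (-2, 3)
```

    With a poor initialization, this same loop can settle in a local optimum of the likelihood, which is precisely the failure mode the annealing extension targets.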

  14. A Comprehensive Linkage Map of the Dog Genome

    PubMed Central

    Wong, Aaron K.; Ruhe, Alison L.; Dumont, Beth L.; Robertson, Kathryn R.; Guerrero, Giovanna; Shull, Sheila M.; Ziegle, Janet S.; Millon, Lee V.; Broman, Karl W.; Payseur, Bret A.; Neff, Mark W.

    2010-01-01

    We have leveraged the reference sequence of a boxer to construct the first complete linkage map for the domestic dog. The new map improves access to the dog's unique biology, from human disease counterparts to fascinating evolutionary adaptations. The map was constructed with ∼3000 microsatellite markers developed from the reference sequence. Familial resources afforded 450 mostly phase-known meioses for map assembly. The genotype data supported a framework map with ∼1500 loci. An additional ∼1500 markers served as map validators, contributing modestly to estimates of recombination rate but supporting the framework content. Data from ∼22,000 SNPs informing on a subset of meioses supported map integrity. The sex-averaged map extended 21 M and revealed marked region- and sex-specific differences in recombination rate. The map will enable empiric coverage estimates and multipoint linkage analysis. Knowledge of the variation in recombination rate will also inform on genomewide patterns of linkage disequilibrium (LD), and thus benefit association, selective sweep, and phylogenetic mapping approaches. The computational and wet-bench strategies can be applied to the reference genome of any nonmodel organism to assemble a de novo linkage map. PMID:19966068

  15. IMNN: Information Maximizing Neural Networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Although compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  16. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
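    The general recipe, forming a likelihood function from noisy measurements of a dynamic system and maximizing it over the unknown parameters, can be sketched on a toy scalar system. The model, noise level, and grid search below are illustrative assumptions, not MXLKID's algorithm:

```python
import numpy as np

# Toy system x[k+1] = a * x[k], observed with Gaussian measurement noise.
rng = np.random.default_rng(4)
a_true, sigma = 0.8, 0.05
x = a_true ** np.arange(50)                 # true trajectory, x[0] = 1
y = x + rng.normal(0, sigma, 50)            # noisy measurements

def neg_log_likelihood(a):
    """Gaussian negative log-likelihood (up to a constant) for parameter a."""
    pred = a ** np.arange(50)               # model trajectory for this a
    return 0.5 * np.sum((y - pred) ** 2) / sigma ** 2

grid = np.linspace(0.5, 0.99, 491)
a_hat = grid[np.argmin([neg_log_likelihood(a) for a in grid])]
print(a_hat)  # close to the true parameter 0.8
```

    For Gaussian noise, maximizing the likelihood reduces to nonlinear least squares; real identifiers replace the grid search with gradient-based optimization.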

  17. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-08

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.

  18. Crossover and maximal fat-oxidation points in sedentary healthy subjects: methodological issues.

    PubMed

    Gmada, N; Marzouki, H; Haboubi, M; Tabka, Z; Shephard, R J; Bouhlel, E

    2012-02-01

    Our study aimed to assess the influence of protocol on the crossover point and maximal fat-oxidation (LIPOX(max)) values in sedentary, but otherwise healthy, young men. Maximal oxygen intake was assessed in 23 subjects, using a progressive maximal cycle ergometer test. Twelve sedentary males (aged 20.5±1.0 years) whose directly measured maximal aerobic power (MAP) values were lower than their theoretical maximal values (tMAP) were selected from this group. These individuals performed, in random sequence, three submaximal graded exercise tests, separated by three-day intervals; work rates were based on the tMAP in one test and on MAP in the remaining two. The third test was used to assess the reliability of data. Heart rate, respiratory parameters, blood lactate, the crossover point and LIPOX(max) values were measured during each of these tests. The crossover point and LIPOX(max) values were significantly lower when the testing protocol was based on tMAP rather than on MAP (P<0.001). Respiratory exchange ratios were significantly lower with MAP than with tMAP at 30, 40, 50 and 60% of maximal aerobic power (P<0.01). At the crossover point, lactate and 5-min postexercise oxygen consumption (EPOC(5 min)) values were significantly higher using tMAP rather than MAP (P<0.001). During the first 5 min of recovery, EPOC(5 min) and blood lactate were significantly correlated (r=0.89; P<0.001). Our data show that, to assess the crossover point and LIPOX(max) values for research purposes, the protocol must be based on the measured MAP rather than on a theoretical value. Such a determination should improve individualization of training for initially sedentary subjects. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  19. Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain

    PubMed Central

    Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu

    2014-01-01

    In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high dimensional fields, with each voxel exhibiting a vector-valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific vector feature. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, strong correlation was found between the volume size of the automated and the manual parcellation. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images: an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486
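    The per-structure likelihood fusion described above can be sketched in miniature: each atlas supplies a Gaussian model for a voxel feature, log-likelihoods are summed across atlases, and the voxel is assigned the structure with the highest fused score. The structures, means, and standard deviations below are invented for illustration, and the full method's mixtures, multi-contrast features, and EM updates are omitted.

```python
import numpy as np

feature = 0.72                               # e.g. FA observed at one voxel
atlases = [                                  # per-atlas (mean, sigma) models
    {"white_matter": (0.70, 0.05), "gray_matter": (0.30, 0.05)},
    {"white_matter": (0.68, 0.06), "gray_matter": (0.33, 0.06)},
]

def log_gauss(x, mu, sigma):
    """Gaussian log-density up to an additive constant."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

# Fuse: sum per-atlas log-likelihoods (i.e., multiply the likelihoods).
fused = {}
for structure in atlases[0]:
    fused[structure] = sum(log_gauss(feature, *a[structure]) for a in atlases)

print(max(fused, key=fused.get))  # prints "white_matter"
```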

  20. Multipoint Fuel Injection Arrangements

    NASA Technical Reports Server (NTRS)

    Prociw, Lev Alexander (Inventor)

    2017-01-01

    A multipoint fuel injection system includes a plurality of fuel manifolds. Each manifold is in fluid communication with a plurality of injectors arranged circumferentially about a longitudinal axis for multipoint fuel injection. The injectors of separate respective manifolds are spaced radially apart from one another for separate radial staging of fuel flow to each respective manifold.

  1. Genome-wide and fine-resolution association analysis of malaria in West Africa.

    PubMed

    Jallow, Muminatou; Teo, Yik Ying; Small, Kerrin S; Rockett, Kirk A; Deloukas, Panos; Clark, Taane G; Kivinen, Katja; Bojang, Kalifa A; Conway, David J; Pinder, Margaret; Sirugo, Giorgio; Sisay-Joof, Fatou; Usen, Stanley; Auburn, Sarah; Bumpstead, Suzannah J; Campino, Susana; Coffey, Alison; Dunham, Andrew; Fry, Andrew E; Green, Angela; Gwilliam, Rhian; Hunt, Sarah E; Inouye, Michael; Jeffreys, Anna E; Mendy, Alieu; Palotie, Aarno; Potter, Simon; Ragoussis, Jiannis; Rogers, Jane; Rowlands, Kate; Somaskantharajah, Elilan; Whittaker, Pamela; Widden, Claire; Donnelly, Peter; Howie, Bryan; Marchini, Jonathan; Morris, Andrew; SanJoaquin, Miguel; Achidi, Eric Akum; Agbenyega, Tsiri; Allen, Angela; Amodu, Olukemi; Corran, Patrick; Djimde, Abdoulaye; Dolo, Amagana; Doumbo, Ogobara K; Drakeley, Chris; Dunstan, Sarah; Evans, Jennifer; Farrar, Jeremy; Fernando, Deepika; Hien, Tran Tinh; Horstmann, Rolf D; Ibrahim, Muntaser; Karunaweera, Nadira; Kokwaro, Gilbert; Koram, Kwadwo A; Lemnge, Martha; Makani, Julie; Marsh, Kevin; Michon, Pascal; Modiano, David; Molyneux, Malcolm E; Mueller, Ivo; Parker, Michael; Peshu, Norbert; Plowe, Christopher V; Puijalon, Odile; Reeder, John; Reyburn, Hugh; Riley, Eleanor M; Sakuntabhai, Anavaj; Singhasivanon, Pratap; Sirima, Sodiomon; Tall, Adama; Taylor, Terrie E; Thera, Mahamadou; Troye-Blomberg, Marita; Williams, Thomas N; Wilson, Michael; Kwiatkowski, Dominic P

    2009-06-01

    We report a genome-wide association (GWA) study of severe malaria in The Gambia. The initial GWA scan included 2,500 children genotyped on the Affymetrix 500K GeneChip, and a replication study included 3,400 children. We used this to examine the performance of GWA methods in Africa. We found considerable population stratification, and also that signals of association at known malaria resistance loci were greatly attenuated owing to weak linkage disequilibrium (LD). To investigate possible solutions to the problem of low LD, we focused on the HbS locus, sequencing this region of the genome in 62 Gambian individuals and then using these data to conduct multipoint imputation in the GWA samples. This increased the signal of association, from P = 4 × 10^-7 to P = 4 × 10^-14, with the peak of the signal located precisely at the HbS causal variant. Our findings provide proof of principle that fine-resolution multipoint imputation, based on population-specific sequencing data, can substantially boost authentic GWA signals and enable fine mapping of causal variants in African populations.

  2. A recoding scheme for X-linked and pseudoautosomal loci to be used with computer programs for autosomal LOD-score analysis.

    PubMed

    Strauch, Konstantin; Baur, Max P; Wienker, Thomas F

    2004-01-01

    We present a recoding scheme that allows for a parametric multipoint X-chromosomal linkage analysis of dichotomous traits in the context of a computer program for autosomes that can use trait models with imprinting. Furthermore, with this scheme, it is possible to perform a joint multipoint analysis of X-linked and pseudoautosomal loci. It is required that (1) the marker genotypes of all female nonfounders are available and that (2) there are no male nonfounders who have daughters in the pedigree. The second requirement does not apply if the trait locus is pseudoautosomal. The X-linked marker loci are recoded by adding a dummy allele to the males' hemizygous genotypes. For modelling an X-linked trait locus, five different liability classes are defined, in conjunction with a paternal imprinting model for male nonfounders. The formulation aims at the mapping of a diallelic trait locus relative to an arbitrary number of codominant markers with known genetic distances, in cases where a program for a genuine X-chromosomal analysis is not available. 2004 S. Karger AG, Basel.

  3. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  4. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  5. Mapping the Dark Matter with 6dFGS

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.

    2012-05-01

    Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/sec and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic mach number, which show linearly declining bulk flow with increasing scale.

  6. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  7. Effect of age and gender on the number of motor units in healthy subjects estimated by the multipoint incremental MUNE method.

    PubMed

    Gawel, Malgorzata; Kostera-Pruszczyk, Anna

    2014-06-01

    Motor unit number estimation (MUNE) is a tool for estimating the number of motor units. The aim was to evaluate the multipoint incremental MUNE method in a healthy population, to analyze whether aging, gender, and the dominant hand side influence the motor unit number, and to assess reproducibility of MUNE with the Shefner modification. We studied 60 volunteers (mean age, 47 ± 17.7 years) in four groups aged 18 to 30, 31 to 45, 46 to 60, and above 60 years. Motor unit number estimation was calculated in the abductor pollicis brevis (APB) and the abductor digiti minimi (ADM) by dividing the maximal compound motor action potential amplitude by the mean single motor unit action potential amplitude. Test-retest variability was 7%. The mean value of MUNE for APB was 133.2 ± 43 and for ADM was 157.1 ± 39.4. Significant differences in MUNE results were found between groups aged 18 to 30 and 60 years or older and between groups aged 31 to 45 and 60 years or older. Motor unit number estimation results correlated negatively with the age of subjects for both APB and ADM. Single motor unit action potential, reflecting the size of the motor unit, increased with the age of subjects only in APB. Compound motor action potential amplitude correlated negatively with the age of subjects in APB and ADM. Significant correlations were seen between MUNE in APB or ADM and compound motor action potential amplitude in these muscles and the age of female subjects. A similar relationship was not found in males. The multipoint incremental MUNE method with the Shefner modification is a noninvasive, easy-to-perform method with high reproducibility. The loss of motor neurons because of aging could be confirmed by our MUNE study and seems to be more pronounced in females.
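    The MUNE computation itself reduces to a single division: maximal CMAP amplitude over the mean single-motor-unit potential (SMUP) amplitude. A sketch with made-up amplitudes (not study data):

```python
def mune(cmap_amplitude_mv, smup_amplitudes_mv):
    """Estimate motor unit count: maximal CMAP amplitude divided by the
    mean amplitude of the sampled single motor unit potentials."""
    mean_smup = sum(smup_amplitudes_mv) / len(smup_amplitudes_mv)
    return cmap_amplitude_mv / mean_smup

# Hypothetical values: 10 mV CMAP, four sampled SMUPs averaging 0.075 mV.
print(round(mune(10.0, [0.05, 0.10, 0.07, 0.08])))  # prints 133
```

    The multipoint incremental variant improves the SMUP sample by collecting increments at several stimulation sites, which stabilizes the denominator of this ratio.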

  8. Localization of A Novel Autosomal Recessive Non-Syndromic Hearing Impairment Locus (DFNB38) to 6q26–q27 in a Consanguineous Kindred from Pakistan

    PubMed Central

    Ansar, Muhammad; Ramzan, Mohammad; Pham, Thanh L.; Yan, Kai; Jamal, Syed Muhammad; Haque, Sayedul; Ahmad, Wasim; Leal, Suzanne M.

    2010-01-01

    For autosomal recessive nonsyndromic hearing impairment over 30 loci have been mapped and 19 genes have been identified. DFNB38, a novel locus for autosomal recessive nonsyndromic hearing impairment, was localized in a consanguineous Pakistani kindred to 6q26–q27. The affected family members present with profound prelingual sensorineural hearing impairment and use sign language for communications. Linkage was established to microsatellite markers located on chromosome 6q26–q27 (Multipoint lod score 3.6). The genetic region for DFNB38 spans 10.1 cM according to the Marshfield genetic map and is bounded by markers D6S980 and D6S1719. This genetic region corresponds to 3.4 MB on the sequence-based physical map. PMID:12890929

  9. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  10. Computation of wheel-rail contact force for non-mapping wheel-rail profile of Translohr tram

    NASA Astrophysics Data System (ADS)

    Ji, Yuanjin; Ren, Lihui; Zhou, Jinsong

    2017-09-01

    The Translohr tram has steel wheels, in V-like arrangements, as guide wheels. These run on guide rails in inverted-V arrangements. However, the horizontal and vertical coordinates of the guide wheels and guide rails are not always mapped one-to-one. In this study, a simplified elastic method is proposed to calculate the contact points between the wheels and the rails. By transforming the coordinates, the non-mapping geometric relationship between wheel and rail is converted into a mapping relationship. To account for the Translohr tram's multi-point contact between the guide wheel and the guide rail, the elastic-contact hypothesis takes into account the existence of contact patches between the bodies, and the location of the contact points is calculated using the simplified elastic method. In order to speed up the calculation, a multi-dimensional contact table is generated, enabling the simulation of a Translohr tram running on curves with different radii.

  11. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. 
Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have a large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. These results demonstrate the feasibility of our new approach of using a DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
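
    The initial-contour step described above (thresholding the likelihood map, then hole-filling) can be sketched on toy 2D data. The disk-shaped "bladder" and the scipy-based pipeline below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

# Toy 2D "likelihood map": high inside a disk (a stand-in for the bladder),
# with an artificial interior hole -- purely illustrative data
y, x = np.mgrid[:64, :64]
likelihood = ((x - 32) ** 2 + (y - 32) ** 2 < 20 ** 2).astype(float)
likelihood[30:34, 30:34] = 0.0           # simulated low-likelihood hole

mask = likelihood > 0.5                  # thresholding
mask = ndimage.binary_fill_holes(mask)   # hole-filling -> initial contour region
print(mask[32, 32], mask[0, 0])          # True False
```

    In the paper this binary region then seeds 3D and 2D level sets for refinement.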

  12. TLNS3D/CDISC Multipoint Design of the TCA Concept

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Mann, Michael J.

    1999-01-01

    This paper presents the work done to date by the authors on developing an efficient approach to multipoint design and applying it to the design of the HSR TCA (High Speed Research Technology Concept Aircraft) configuration. While the title indicates that this exploratory study has been performed using the TLNS3DMB flow solver and the CDISC (Constrained Direct Iterative Surface Curvature) design method, the CDISC method could have been used with any flow solver, and the multipoint design approach does not require the use of CDISC. The goal of the study was to develop a multipoint design method that could achieve a design in about the same time as 10 analysis runs.

  13. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  14. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings, which represent a significant portion of the complexity of an error-correction-code decoder for high-order constellations. A recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
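
    The abstract does not give the algorithm itself, but a standard symbol-to-bit likelihood mapping (the max-log LLR approximation, shown here with a hypothetical 4-ary Gray-labeled constellation) can be sketched as:

```python
def bit_llrs_maxlog(symbol_loglikes, bit_labels):
    """Map per-symbol log-likelihoods to per-bit LLRs (max-log approximation).
    symbol_loglikes[k] = log p(r | symbol k); bit_labels[k] = bits of symbol k."""
    nbits = len(bit_labels[0])
    llrs = []
    for b in range(nbits):
        max0 = max(ll for ll, bits in zip(symbol_loglikes, bit_labels) if bits[b] == 0)
        max1 = max(ll for ll, bits in zip(symbol_loglikes, bit_labels) if bits[b] == 1)
        llrs.append(max0 - max1)  # positive LLR favors bit value 0
    return llrs

# Hypothetical 4-ary constellation with Gray labels and observed log-likelihoods
labels = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(bit_llrs_maxlog([-0.1, -2.3, -4.0, -3.1], labels))  # ≈ [3.0, 2.2]
```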

  15. Contamination of food products with Mycobacterium avium paratuberculosis: a systematic review.

    PubMed

    Eltholth, M M; Marsh, V R; Van Winden, S; Guitian, F J

    2009-10-01

    Although a causal link between Mycobacterium avium subspecies paratuberculosis (MAP) and Crohn's disease has not been proved, previous studies suggest that the potential routes of human exposure to MAP should be investigated. We conducted a systematic review of the literature concerning the likelihood of contamination of food products with MAP and the likely changes in the quantity of MAP in dairy and meat products along their respective production chains. Relevant data were extracted from 65 research papers and synthesized qualitatively. Although estimates of the prevalence of Johne's disease are scarce, particularly for non-dairy herds, the available data suggest that the likelihood of contamination of raw milk with MAP in most studied regions is substantial. The presence of MAP in raw and pasteurized milk has been the subject of several studies, which show that pasteurized milk is not always MAP-free and that the effectiveness of pasteurization in inactivating MAP depends on the initial concentration of the agent in raw milk. The most recent studies indicate that beef can be contaminated with MAP via dissemination of the pathogen in the tissues of infected animals. Currently available data suggest that the likelihood of dairy and meat products being contaminated with MAP on retail sale should not be ignored.

  16. Fine genetic mapping of the gene for nevoid basal cell carcinoma syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wicking, C.; Berkman, J.; Wainwright, B.

    1994-08-01

    Nevoid basal cell carcinoma syndrome (NBCCS, or Gorlin syndrome) is a cancer predisposition syndrome characterized by multiple basal cell carcinomas and diverse developmental defects. The gene responsible for NBCCS, which is most likely to be a tumor suppressor gene, has previously been mapped to 9q22.3-q31 in a 12-cM interval between the microsatellite marker loci D9S12.1 and D9S109. Combined multipoint and haplotype analyses of additional polymorphisms in this region in our collection of Australasian pedigrees have further refined the localization of the gene to between the markers D9S196 and D9S180, an interval reported to be approximately 2 cM. 27 refs., 4 figs., 1 tab.

  17. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains rich anatomical, structural and orientational information about the fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, reducing dramatically the computational cost. The Fisher-scoring method is used for achieving fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying priors. We also describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
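
    As a much-reduced illustration of EM under Rician noise (estimating a single underlying signal amplitude with known noise level, not the authors' GLM-based tensor estimator), the standard I1/I0 fixed-point update can be sketched as:

```python
import numpy as np
from scipy.special import i0e, i1e

def rician_amplitude_em(m, sigma, iters=100):
    """EM-style fixed-point MLE of the underlying signal amplitude A
    from Rician-distributed magnitudes m with known noise level sigma."""
    A = max(np.mean(m), 1e-8)            # initialize at the (biased) sample mean
    for _ in range(iters):
        z = A * m / sigma ** 2
        ratio = i1e(z) / i0e(z)          # I1(z)/I0(z), overflow-safe
        A = np.mean(m * ratio)           # EM update
    return A

rng = np.random.default_rng(0)
A_true, sigma, n = 3.0, 1.0, 20000
m = np.abs(A_true + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))
A_hat = rician_amplitude_em(m, sigma)
print(A_hat)  # ≈ 3.0, whereas the plain sample mean overestimates A
```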

  18. Real space mapping of oxygen vacancy diffusion and electrochemical transformations by hysteretic current reversal curve measurements

    DOEpatents

    Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni

    2014-06-10

    An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.
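
    The reversal-curve areas described above can be computed from a sampled (bias, current) loop. A minimal sketch using the shoelace formula on a hypothetical unit-square loop (illustrative data only, not from the patent):

```python
import numpy as np

def loop_area(voltage, current):
    """Area enclosed by a current-response reversal curve (one bias cycle),
    via the shoelace formula on the (V, I) loop vertices."""
    v, i = np.asarray(voltage), np.asarray(current)
    return 0.5 * abs(np.dot(v, np.roll(i, -1)) - np.dot(i, np.roll(v, -1)))

# Hypothetical hysteretic cycle: a unit-square loop encloses area 1
print(loop_area([0, 1, 1, 0], [0, 0, 1, 1]))  # 1.0
```

    Mapping this area over the maximal bias amplitude of each cycle is what locates the onset and ending of electrochemical activity.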

  19. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion inevitably occurs to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. In addition, to reduce as much as possible the effect of dynamic objects, such as the walking pedestrians often present in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
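
    The probability increase/decrease strategy for the occupancy likelihood map resembles a standard log-odds occupancy grid update. A minimal single-cell sketch (the inverse sensor model log-odds below are assumed values, not the paper's learned ones):

```python
import math

# Assumed inverse sensor model: log-odds added on a laser "hit" or "miss"
L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.4 / 0.6)

def update_cell(logodds, hit):
    """Increase a grid cell's occupancy log-odds on a hit, decrease on a miss."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-logodds))

cell = 0.0  # prior probability 0.5
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
print(round(probability(cell), 3))  # 0.894
```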

  20. On shifted Jacobi spectral method for high-order multi-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Hafez, R. M.

    2012-10-01

    This paper reports a spectral tau method for numerically solving multi-point boundary value problems (BVPs) of linear high-order ordinary differential equations. The construction of the shifted Jacobi tau approximation is based on conventional differentiation. This use of differentiation allows the imposition of the governing equation at the whole set of grid points and the straightforward implementation of multiple boundary conditions. The extension of the tau method to high-order multi-point BVPs with variable coefficients is treated using the shifted Jacobi Gauss-Lobatto quadrature. A shifted Jacobi collocation method is developed for solving nonlinear high-order multi-point BVPs. The performance of the proposed methods is investigated by considering several examples. Accurate results and high convergence rates are achieved.

  1. Evaluation of two methods for using MR information in PET reconstruction

    NASA Astrophysics Data System (ADS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-02-01

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is the use of boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower. Both methods performed better, in terms of MSE and CV, than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomical information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be assessed.
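
    The Bowsher idea (select, per voxel, the neighbors whose MR intensity is most similar, and smooth only over those) can be sketched in 1D with illustrative values; real implementations work on a 3D neighborhood:

```python
def bowsher_neighbors(mr, b=1):
    """For each pixel of a 1D image, keep the b adjacent neighbors whose MR
    intensity is most similar (the Bowsher set; 3D PET uses larger neighborhoods)."""
    sets = []
    for i in range(len(mr)):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(mr)]
        nbrs.sort(key=lambda j: abs(mr[j] - mr[i]))
        sets.append(nbrs[:b])
    return sets

def bowsher_penalty(pet, sets):
    """Quadratic smoothing penalty restricted to the Bowsher-selected neighbors."""
    return sum((pet[i] - pet[j]) ** 2 for i, s in enumerate(sets) for j in s)

mr = [1, 1, 1, 10, 10, 10]           # MR image with a sharp anatomical boundary
pet = [5.0, 5.1, 5.0, 9.0, 9.2, 9.1]
sets = bowsher_neighbors(mr, b=1)
print(bowsher_penalty(pet, sets))    # penalizes only within-region differences
```

    Because the neighbor sets never cross the MR edge, the large PET difference at the boundary is not penalized, which is why no segmentation is needed.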

  2. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework, by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of the marginal mean parameters are robust to misspecification of the time dependence. We implement an iterative procedure to obtain the MLEs of the QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  3. Experimental research and numerical optimisation of multi-point sheet metal forming implementation using a solid elastic cushion system

    NASA Astrophysics Data System (ADS)

    Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.

    2017-09-01

    There is a growing demand for flexible manufacturing techniques that meet rapid changes in customer needs. A finite element analysis numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique in which the same tool can be readily reconfigured to produce different parts. The process suffers from geometrical defects such as wrinkling and dimpling, which have been found to be the cause of the major surface quality problems. This study investigated the influence of parameters such as the elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. To this end, a multi-point stamping process using a blank holder was carried out in order to study the effects on wrinkling, dimpling, thickness variation and forming force, with the aim of determining the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of the process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results indicate that the best outcomes were obtained with an elastic cushion of appropriate thickness made of polyurethane with a hardness of Shore A90, and that the application of lubrication can improve the shape accuracy of the formed workpiece. These results were compared with the numerical simulation results for multi-point forming of hemispherical shapes using a blank holder, and it was found that a suitable cushion hardness helps to reduce wrinkling and the maximum deviation.

  4. Linkage analysis of primary open-angle glaucoma excludes the juvenile glaucoma region on chromosome 1q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirtz, M.K.; Acott, T.S.; Samples, J.R.

    1994-09-01

    The gene for one form of juvenile glaucoma has been mapped to chromosome 1q21-q31. This raises the possibility of primary open-angle glaucoma (POAG) also mapping to this region if the same defective gene causes both diseases. To address this question, linkage analysis was performed on a large POAG kindred. Blood samples or skin biopsies were obtained from 40 members of this family. Individuals were diagnosed as having POAG if they met two or more of the following criteria: (1) visual field defects compatible with glaucoma on automated perimetry; (2) optic nerve head and/or nerve fiber layer analysis compatible with glaucomatous damage; (3) high intraocular pressures (> 20 mm Hg). Patients were considered glaucoma suspects if they met only one criterion; these individuals were excluded from the analysis. Of the 40 members, seven were diagnosed with POAG and four were termed suspects. The earliest age of onset was 38 years, while the average age of onset was 65 years. We performed two-point and multipoint linkage analysis using five markers that encompass the region 1q21-q31; specifically, D1S194, D1S210, D1S212, D1S191 and LAMB2. Two-point lod scores excluded tight linkage with all markers except D1S212 (maximum lod score of 1.07 at theta = 0.0). In the multipoint analysis, including D1S210-D1S212-LAMB2 and POAG, the entire 11 cM region spanned by these markers was excluded for linkage with POAG; that is, lod scores were < -2.0. In conclusion, POAG in this family does not map to chromosome 1q21-q31, and thus this family carries a gene that is distinct from the juvenile glaucoma gene.
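
    For reference, a two-point lod score such as the 1.07 quoted above compares the likelihood at a given recombination fraction theta against free recombination (theta = 0.5). For fully informative, phase-known meioses this reduces to a one-line formula (the counts below are hypothetical):

```python
import math

def two_point_lod(theta, r, nr):
    """Two-point lod score for r recombinants and nr non-recombinants among
    phase-known meioses: log10 of L(theta) / L(theta = 0.5)."""
    like = theta ** r * (1 - theta) ** nr
    return math.log10(like / 0.5 ** (r + nr))

# 10 fully informative meioses, no recombinants, evaluated at theta = 0
print(round(two_point_lod(0.0, 0, 10), 2))  # 3.01
```

    A score of 3 or more is the conventional threshold for declaring linkage, and -2 or less (as in the exclusion mapping above) for excluding it.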

  5. Global Magnetosphere Evolution During 22 June 2015 Geomagnetic Storm as Seen From Multipoint Observations and Comparison With MHD-Ring Current Model

    NASA Astrophysics Data System (ADS)

    Buzulukova, N.; Moore, T. E.; Dorelli, J.; Fok, M. C. H.; Sibeck, D. G.; Angelopoulos, V.; Goldstein, J.; Valek, P. W.; McComas, D. J.

    2015-12-01

    On 22-23 June 2015, a severe geomagnetic storm occurred with a Dst minimum of approximately -200 nT. During this extreme event, multipoint observations of magnetospheric dynamics were obtained by a fleet of Geospace spacecraft including MMS, TWINS, the Van Allen Probes and THEMIS. We present an analysis of satellite data during this event, and use a global coupled MHD-ring current model (BATSRUS-CRCM) to connect multipoint observations from different parts of the magnetosphere. The analysis helps to identify different magnetospheric domains from multipoint measurements and various magnetospheric boundary motions. We will explore how the initial disturbance from the solar wind propagates through the magnetosphere, causing energization of plasma in the inner magnetosphere and producing an extreme geomagnetic storm.

  6. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming that the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
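
    As a heavily simplified stand-in for the models discussed (a standard two-component mixture cure model with an exponential latency rather than the paper's Weibull/Conway-Maxwell-Poisson structure), the E and M steps for right-censored data can be sketched as:

```python
import numpy as np

def cure_em(t, d, iters=300):
    """EM for a mixture cure model with exponential latency (a simplified
    stand-in, not the paper's model). t: follow-up times; d: 1=event, 0=censored."""
    pi, lam = 0.5, 1.0 / np.mean(t)           # cure fraction, event rate
    for _ in range(iters):
        s_u = np.exp(-lam * t)                # survival of the uncured
        w = np.where(d == 1, 1.0,             # E-step: P(uncured | data)
                     (1 - pi) * s_u / (pi + (1 - pi) * s_u))
        pi = 1.0 - w.mean()                   # M-step: cure fraction
        lam = d.sum() / (w * t).sum()         # M-step: weighted exponential MLE
    return pi, lam

# Synthetic check: 30% cured, uncured times Exp(rate 0.5), censoring at t = 10
rng = np.random.default_rng(1)
n, true_pi, true_lam, cmax = 5000, 0.3, 0.5, 10.0
cured = rng.random(n) < true_pi
latent = rng.exponential(1 / true_lam, n)
t = np.where(cured, cmax, np.minimum(latent, cmax))
d = (~cured & (latent < cmax)).astype(int)
pi_hat, lam_hat = cure_em(t, d)
print(pi_hat, lam_hat)  # ≈ 0.3 and 0.5
```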

  7. Further refinement of the location for autosomal dominant retinitis pigmentosa on chromosome 7p (RP9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inglehearn, C.F.; Keen, T.J.; Al-Maghtheh, M.

    1994-04-01

    A form of autosomal dominant retinitis pigmentosa (adRP) mapping to chromosome 7p was recently reported by this laboratory, in a single large family from southeastern England. Further sampling of the family and the use of a number of genetic markers from 7p have facilitated the construction of a series of multipoint linkage maps of the region indicating the most likely disease gene location. From this and from haplotype data, the locus can now be placed between the markers D7S484 and D7S526, in an interval estimated to be 1.6-4 cM. Genetic distances between the markers previously reported to be linked to this region and those described in the recent whole-genome poly-CA map were estimated from data in this and other families. These data should assist in the construction of a physical map of the region and will help to identify candidate genes for the 7p adRP locus. 21 refs., 3 figs., 1 tab.

  8. Impacts of Maximizing Tendencies on Experience-Based Decisions.

    PubMed

    Rim, Hye Bin

    2017-06-01

    Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, the desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead them to make experience-based decisions even when payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels in both the accuracy of the knowledge gained from experience and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers, as expected. The implications of the findings and unanswered questions for future research are discussed.

  9. Breakthrough Science Enabled by Smallsat Optical Communication

    NASA Astrophysics Data System (ADS)

    Gorjian, V.

    2017-12-01

    The recent NRC panel on "Achieving Science with Cubesats" found that "CubeSats have already proven themselves to be an important scientific tool. CubeSats can produce high-value science, as demonstrated by peer-reviewed publications that address decadal survey science goals." While some science is purely related to the size of the collecting aperture, there are plentiful examples of new and exciting experiments that can be achieved using the relatively inexpensive Cubesat platforms. We will present various potential science applications that can benefit from higher bandwidth communication. For example, on or near Earth orbit, Cubesats could provide hyperspectral imaging, gravity field mapping, atmospheric probing, and terrain mapping. These can be achieved either as large constellations of Cubesats or a few Cubesats that provide multi-point observations. Away from the Earth (up to 1AU) astrophysical variability studies, detections of solar particles between the Earth and Venus, mapping near earth objects, and high-speed videos of the Sun will also be enabled by high bandwidth communications.

  10. Benign Familial Infantile Convulsions: Mapping of a Novel Locus on Chromosome 2q24 and Evidence for Genetic Heterogeneity

    PubMed Central

    Malacarne, Michela; Gennaro, Elena; Madia, Francesca; Pozzi, Sarah; Vacca, Daniela; Barone, Baldassare; Bernardina, Bernardo dalla; Bianchi, Amedeo; Bonanni, Paolo; De Marco, Pasquale; Gambardella, Antonio; Giordano, Lucio; Lispi, Maria Luisa; Romeo, Antonino; Santorum, Enrica; Vanadia, Francesca; Vecchi, Marilena; Veggiotti, Pierangelo; Vigevano, Federico; Viri, Franco; Bricarelli, Franca Dagna; Zara, Federico

    2001-01-01

    In 1997, a locus for benign familial infantile convulsions (BFIC) was mapped to chromosome 19q. Further data suggested that this locus is not involved in all families with BFIC. In the present report, we studied eight Italian families and mapped a novel BFIC locus within a 0.7-cM interval of chromosome 2q24, between markers D2S399 and D2S2330. A maximum multipoint HLOD score of 6.29 was obtained under the hypothesis of genetic heterogeneity. Furthermore, the clustering of chromosome 2q24–linked families in southern Italy may indicate a recent founder effect. In our series, 40% of the families are linked to neither the chromosome 19q locus nor the 2q locus, suggesting that at least three loci are involved in BFIC. This finding is consistent with other autosomal dominant idiopathic epilepsies in which different genes have been found to be implicated. PMID:11326335

  11. The Juberg-Marsidi syndrome maps to the proximal long arm of the X chromosome (Xq12-q21)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saugier-Veber, P.; Abadie, V.; Turleau, C.

    Juberg-Marsidi syndrome (McKusick 309590) is a rare X-linked recessive condition characterized by severe mental retardation, growth failure, sensorineural deafness, and microgenitalism. Here the authors report the genetic mapping of the Juberg-Marsidi gene to the proximal long arm of the X chromosome (Xq12-q21) by linkage to probe pRX214H1 at the DXS441 locus (Z = 3.24 at theta = 0.00). Multipoint linkage analysis placed the Juberg-Marsidi gene within the interval defined by the DXS159 and DXYS1X loci in the Xq12-q21 region. These data provide evidence for the genetic distinction between Juberg-Marsidi syndrome and several other X-linked mental retardation syndromes that have hypogonadism and hypogenitalism and that have been localized previously. Finally, the mapping of the Juberg-Marsidi gene is of potential interest for reliable genetic counseling of at-risk women. 25 refs., 2 figs., 3 tabs.

  12. The gene for autosomal dominant cerebellar ataxia with pigmentary macular dystrophy maps to chromosome 3p12-p21.1.

    PubMed

    Benomar, A; Krols, L; Stevanin, G; Cancel, G; LeGuern, E; David, G; Ouhabi, H; Martin, J J; Dürr, A; Zaim, A

    1995-05-01

    Autosomal dominant cerebellar ataxia with pigmentary macular dystrophy (ADCA type II) is a rare neurodegenerative disorder with marked anticipation. We have mapped the ADCA type II locus to chromosome 3 by linkage analysis in a genome-wide search and found no evidence for genetic heterogeneity among four families of different geographic origins. Haplotype reconstruction initially restricted the locus to the 33 cM interval flanked by D3S1300 and D3S1276 located at 3p12-p21.1. Combined multipoint analysis, using the Zmax-1 method, further reduced the candidate interval to an 8 cM region around D3S1285. Our results show that ADCA type II is a genetically homogeneous disorder, independent of the heterogeneous group of type I cerebellar ataxias.

  13. A novel autosomal recessive non-syndromic hearing impairment locus (DFNB47) maps to chromosome 2p25.1-p24.3.

    PubMed

    Hassan, Muhammad Jawad; Santos, Regie Lyn P; Rafiq, Muhammad Arshad; Chahrour, Maria H; Pham, Thanh L; Wajid, Muhammad; Hijab, Nadine; Wambangco, Michael; Lee, Kwanghyuk; Ansar, Muhammad; Yan, Kai; Ahmad, Wasim; Leal, Suzanne M

    2006-01-01

    Hereditary hearing impairment (HI) displays extensive genetic heterogeneity. Autosomal recessive (AR) forms of prelingual HI account for approximately 75% of cases with a genetic etiology. A novel AR non-syndromic HI locus (DFNB47) was mapped to chromosome 2p25.1-p24.3, in two distantly related Pakistani kindreds. Genome scan and fine mapping were carried out using microsatellite markers. Multipoint linkage analysis resulted in a maximum LOD score of 4.7 at markers D2S1400 and D2S262. The three-unit support interval was bounded by D2S330 and D2S131. The region of homozygosity was found within the three-unit support interval and flanked by markers D2S2952 and D2S131, which corresponds to 13.2 cM according to the Rutgers combined linkage-physical map. This region contains 5.3 Mb according to the sequence-based physical map. Three candidate genes, KCNF1, ID2 and ATP6V1C2 were sequenced, and were found to be negative for functional sequence variants.

  14. A novel autosomal recessive non-syndromic hearing impairment locus (DFNB47) maps to chromosome 2p25.1-p24.3

    PubMed Central

    Hassan, Muhammad Jawad; Santos, Regie Lyn P.; Rafiq, Muhammad Arshad; Chahrour, Maria H.; Pham, Thanh L.; Wajid, Muhammad; Hijab, Nadine; Wambangco, Michael; Lee, Kwanghyuk; Ansar, Muhammad; Yan, Kai; Ahmad, Wasim; Leal, Suzanne M.

    2010-01-01

    Hereditary hearing impairment (HI) displays extensive genetic heterogeneity. Autosomal recessive (AR) forms of prelingual HI account for ~75% of cases with a genetic etiology. A novel AR non-syndromic HI locus (DFNB47) was mapped to chromosome 2p25.1-p24.3, in two distantly related Pakistani kindreds. Genome scan and fine mapping were carried out using microsatellite markers. Multipoint linkage analysis resulted in a maximum LOD score of 4.7 at markers D2S1400 and D2S262. The three-unit support interval was bounded by D2S330 and D2S131. The region of homozygosity was found within the three-unit support interval and flanked by markers D2S2952 and D2S131, which corresponds to 13.2 cM according to the Rutgers combined linkage-physical map. This region contains 5.3 Mb according to the sequence-based physical map. Three candidate genes, KCNF1, ID2 and ATP6V1C2, were sequenced and found to be negative for functional sequence variants. PMID:16261342

  15. Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.

    PubMed

    Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos

    2011-07-01

    In dynamic positron emission tomography data many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the direct reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
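The direct parametric algorithm studied here builds on the standard MLEM multiplicative update. A minimal non-parametric sketch of that update, run on a toy random system matrix rather than a PET geometry (all names and sizes are illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Standard MLEM update: x <- x * A^T(y / Ax) / (A^T 1)."""
    x = np.ones(A.shape[1])               # uniform positive start
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection, kept positive
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

# Noiseless toy data: the iterates drive the forward projection toward y
rng = np.random.default_rng(0)
A = rng.random((40, 8))
x_true = np.array([1.0, 2.0, 0.5, 3.0, 1.5, 0.2, 2.5, 1.0])
y = A @ x_true
x_hat = mlem(A, y)
print(np.allclose(A @ x_hat, y, rtol=1e-2))  # → True
```

The subset scheme mentioned in the abstract (18-6-1) applies this update to subsets of the data for speed, reverting to the full update for the final iterations.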

  16. Understanding Turbulence using Active and Passive Multipoint Measurements in Laboratory Magnetospheres

    NASA Astrophysics Data System (ADS)

    Mauel, M. E.; Abler, M. C.; Qian, T. M.; Saperstein, A.; Yan, J. R.

    2017-10-01

    In a laboratory magnetosphere, plasma is confined by a strong dipole magnet, and interchange and entropy mode turbulence can be studied and controlled in near steady-state conditions. Turbulence is dominated by long wavelength modes exhibiting chaotic dynamics, intermittency, and an inverse spectral cascade. Here, we summarize recent results: (i) high-resolution measurement of the frequency-wavenumber power spectrum using Capon's "maximum likelihood method", and (ii) direct measurement of the nonlinear coupling of interchange/entropy modes in a turbulent plasma through driven current injection at multiple locations and frequencies. These observations well characterize plasma turbulence over a broad band of wavelengths and frequencies. Finally, we also discuss the application of these techniques to space-based experiments and observations aimed to reveal the nature of heliospheric and magnetospheric plasma turbulence. Supported by NSF-DOE Partnership in Plasma Science Grant DE-FG02-00ER54585.
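Capon's method, as used in result (i), estimates the power at each trial wavenumber from the inverse of the sample covariance of the multipoint probe signals. A hedged sketch on a synthetic single-mode dataset (the 1-D array geometry and all parameters are invented for illustration):

```python
import numpy as np

def capon_spectrum(X, positions, ks):
    """Capon ('maximum likelihood method') wavenumber power estimate:
    P(k) = 1 / (e(k)^H R^-1 e(k)), with R the sample covariance."""
    n = len(positions)
    R = X @ X.conj().T / X.shape[1]
    R += 1e-6 * np.trace(R).real / n * np.eye(n)   # diagonal loading for stability
    Rinv = np.linalg.inv(R)
    P = []
    for k in ks:
        e = np.exp(1j * k * positions)             # steering vector
        P.append(1.0 / np.real(e.conj() @ Rinv @ e))
    return np.array(P)

# Synthetic single mode at k0 = 2.0 with random phase per snapshot, plus noise
rng = np.random.default_rng(1)
pos = np.arange(8.0)
k0 = 2.0
phases = rng.uniform(0, 2 * np.pi, 200)
X = np.exp(1j * (k0 * pos[:, None] + phases[None, :]))
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
ks = np.linspace(0, np.pi, 315)
P = capon_spectrum(X, pos, ks)
print(round(float(ks[np.argmax(P)]), 1))   # the spectrum peaks near k0
```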

  17. 47 CFR 1.824 - Random selection procedures for Multichannel Multipoint Distribution Service and Multipoint...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... an ownership interest of more than 50 percent in the media of mass communication whose service areas... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Complaints, Applications, Tariffs...

  18. Genetic mapping in the presence of genotyping errors.

    PubMed

    Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander

    2007-08-01

    Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
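The inflation effect is easy to reproduce: genotyping errors masquerade as extra recombinants, so the naive recombination fraction (and hence map distance) grows. A hedged simulation of a simple backcross (a deliberately simpler setting than TMAP's phase-known pedigree likelihood) shows the naive estimate inflating and an error-aware estimate recovering the truth:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, eps = 20000, 0.10, 0.05   # meioses, true recomb. fraction, error rate

# Phase-known backcross: marker 2 recombines relative to marker 1 w.p. theta
g1 = rng.integers(0, 2, n)
g2 = np.where(rng.random(n) < theta, 1 - g1, g1)
# Independent genotyping errors flip each observed call w.p. eps
o1 = np.where(rng.random(n) < eps, 1 - g1, g1)
o2 = np.where(rng.random(n) < eps, 1 - g2, g2)

# Naive estimate is inflated: E[p_obs] = theta*(1-2*eps)^2 + 2*eps*(1-eps)
p_obs = np.mean(o1 != o2)
# Inverting that relation gives an error-aware estimate
theta_hat = (p_obs - 2 * eps * (1 - eps)) / (1 - 2 * eps) ** 2
print(round(float(p_obs), 3), round(float(theta_hat), 3))
```

With these numbers the naive estimate comes out near 0.18 rather than the true 0.10, which is exactly the distance inflation the abstract describes.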

  19. Genetic Mapping in the Presence of Genotyping Errors

    PubMed Central

    Cartwright, Dustin A.; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander

    2007-01-01

    Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders. PMID:17277374

  20. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  1. Multipoint fiber-optic laser-ultrasonic actuator based on fiber core-opened tapers.

    PubMed

    Tian, Jiajun; Dong, Xiaolong; Gao, Shimin; Yao, Yong

    2017-11-27

    In this study, a novel fiber-optic, multipoint, laser-ultrasonic actuator based on fiber core-opened tapers (COTs) is proposed and demonstrated. The COTs were fabricated by splicing single-mode fibers using a standard fiber splicer. A COT can effectively couple part of a core mode into cladding modes, and the coupling ratio can be controlled by adjusting the taper length. Such characteristics are used to obtain a multipoint, laser-ultrasonic actuator with balanced signal strength by reasonably controlling the taper lengths of the COTs. As a prototype, we constructed an actuator that generated ultrasound at four points with a balanced ultrasonic strength by connecting four COTs with coupling ratios of 24.5%, 33.01%, 49.51%, and 87.8% in a fiber link. This simple-to-fabricate, multipoint, laser-ultrasonic actuator with balanced ultrasound signal strength has potential applications in fiber-optic ultrasound testing technology.

  2. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. This approach can also raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
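For reference, the skew-normal density adds one shape parameter α to the normal, with α = 0 recovering the symmetric case the abstract mentions. A self-contained sketch of the density itself (the paper's EM fitting step is not reproduced here):

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Skew-normal density: (2/omega) * phi(z) * Phi(alpha*z), z = (x-loc)/scale.
    shape = 0 reduces to the usual normal density."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))
    return 2.0 / scale * phi * Phi

# shape = 0 at the center gives the standard normal value 1/sqrt(2*pi)
print(round(skew_normal_pdf(0.0), 4))  # → 0.3989
```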

  3. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained by ML training, increasing the accuracy by approximately 3% on average.

  4. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815

  5. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  6. Familial isolated hyperparathyroidism is linked to a 1.7 Mb region on chromosome 2p13.3–14

    PubMed Central

    Warner, J; Nyholt, D R; Busfield, F; Epstein, M; Burgess, J; Stranks, S; Hill, P; Perry‐Keene, D; Learoyd, D; Robinson, B; Teh, B T; Prins, J B; Cardinal, J W

    2006-01-01

    Background Familial isolated hyperparathyroidism (FIHP) is an autosomal dominantly inherited form of primary hyperparathyroidism. Although comprising only about 1% of cases of primary hyperparathyroidism, identification and functional analysis of a causative gene for FIHP is likely to advance our understanding of parathyroid physiology and pathophysiology. Methods A genome‐wide screen of DNA from seven pedigrees with FIHP was undertaken in order to identify a region of genetic linkage with the disorder. Results Multipoint linkage analysis identified a region of suggestive linkage (LOD score 2.68) on chromosome 2. Fine mapping with the addition of three other families revealed significant linkage adjacent to D2S2368 (maximum multipoint LOD score 3.43). Recombination events defined a 1.7 Mb region of linkage between D2S2368 and D2S358 in nine pedigrees. Sequencing of the two most likely candidate genes in this region, however, did not identify a gene for FIHP. Conclusions We conclude that a causative gene for FIHP lies within this interval on chromosome 2. This is a major step towards eventual precise identification of a gene for FIHP, likely to be a key component in the genetic regulation of calcium homeostasis. PMID:16525030

  7. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

    A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms for the environmental application of tomography requires, before field implementation, a battery of test concentration data that models reality and tests the limits of the algorithms.
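The additive and multiplicative updates behind ART1/ARTW and MART are compact enough to sketch directly. The toy "room" below uses four ray-sum measurements over a 2 x 2 concentration grid; the geometry and numbers are invented, not the chamber setup from the study:

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=0.5):
    """Additive ART (Kaczmarz) with a nonnegativity clamp after each ray."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
            x = np.maximum(x, 0.0)      # concentrations cannot be negative
    return x

def mart(A, b, n_sweeps=200):
    """Multiplicative ART: positivity is preserved by construction."""
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            proj = a_i @ x
            if proj > 0:
                x = x * (b_i / proj) ** (a_i / a_i.max())
    return x

# Rays measure the row and column sums of a 2 x 2 concentration grid
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true
for recon in (art(A, b), mart(A, b)):
    print(np.allclose(A @ recon, b, atol=1e-2))  # both reproduce the ray sums
```

The system is underdetermined (four unknowns, rank three), so the reconstructions match the measurements rather than necessarily recovering x_true exactly, which mirrors the path-integral nature of open-path data.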

  8. Averaged kick maps: less noise, more signal…and probably less bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor

    2009-09-01

    Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.

  9. Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.

    PubMed

    Merrick, Jason R W; Leclerc, Philip

    2016-04-01

    Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm for seeking the rational decision, but use prospect theory to solve for the attacker's decision, descriptively modeling the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision: whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
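The two behavioral deviations named here have standard functional forms in prospect theory: a value function that is concave for gains and steeper for losses (loss aversion), and an inverse-S probability weighting function (likelihood insensitivity). A sketch using the conventional Tversky-Kahneman parameter estimates, not the article's calibration:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: lam > 1 encodes loss aversion."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: small p overweighted, large p underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 1% chance is overweighted, and a loss hurts 2.25x as much as an equal gain
print(pt_weight(0.01) > 0.01, round(pt_value(-10) / -pt_value(10), 2))  # → True 2.25
```

In an attacker model, outcomes are run through pt_value and outcome probabilities through pt_weight before computing the attacker's preferred action, while the defender still maximizes expected utility.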

  10. Comparative Polygenic Analysis of Maximal Ethanol Accumulation Capacity and Tolerance to High Ethanol Levels of Cell Proliferation in Yeast

    PubMed Central

    Pais, Thiago M.; Foulquié-Moreno, María R.; Hubmann, Georg; Duitama, Jorge; Swinnen, Steve; Goovaerts, Annelies; Yang, Yudi; Dumortier, Françoise; Thevelein, Johan M.

    2013-01-01

    The yeast Saccharomyces cerevisiae is able to accumulate ≥17% ethanol (v/v) by fermentation in the absence of cell proliferation. The genetic basis of this unique capacity is unknown. Up to now, all research has focused on tolerance of yeast cell proliferation to high ethanol levels. Comparison of maximal ethanol accumulation capacity and ethanol tolerance of cell proliferation in 68 yeast strains showed a poor correlation, but higher ethanol tolerance of cell proliferation clearly increased the likelihood of superior maximal ethanol accumulation capacity. We have applied pooled-segregant whole-genome sequence analysis to identify the polygenic basis of these two complex traits using segregants from a cross of a haploid derivative of the sake strain CBS1585 and the lab strain BY. From a total of 301 segregants, 22 superior segregants accumulating ≥17% ethanol in small-scale fermentations and 32 superior segregants growing in the presence of 18% ethanol were separately pooled and sequenced. Plotting SNP variant frequency against chromosomal position revealed eleven and eight Quantitative Trait Loci (QTLs) for the two traits, respectively, and showed that the genetic basis of the two traits is partially different. Fine-mapping and Reciprocal Hemizygosity Analysis identified ADE1, URA3, and KIN3, encoding a protein kinase involved in DNA damage repair, as specific causative genes for maximal ethanol accumulation capacity. These genes, as well as the previously identified MKT1 gene, were not linked in this genetic background to tolerance of cell proliferation to high ethanol levels. The superior KIN3 allele contained two SNPs, which are absent in all yeast strains sequenced up to now. This work provides the first insight into the genetic basis of maximal ethanol accumulation capacity in yeast and reveals for the first time the importance of DNA damage repair in yeast ethanol tolerance. PMID:23754966

  11. Multi-Gigabit Free-Space Optical Data Communication and Network System

    DTIC Science & Technology

    2016-04-01

    Keywords: Infrared (IR), Ultraviolet (UV), Laser Transceiver, Adaptive Beam Tracking, Electronic Attack (EA), Cyber Attack, Multipoint-to-Multipoint Network, Adaptive… Free-space optical datalink timeline: Phase 1, point-to-point demonstration (2012); future work on adaptive optics and quantum cascade lasers.

  12. A novel locus for dilated cardiomyopathy maps to canine chromosome 8.

    PubMed

    Werner, Petra; Raducha, Michael G; Prociuk, Ulana; Sleeper, Meg M; Van Winkle, Thomas J; Henthorn, Paula S

    2008-06-01

    Dilated cardiomyopathy (DCM), the most common form of cardiomyopathy, often leads to heart failure and sudden death. While a substantial proportion of DCMs are inherited, mutations responsible for the majority of DCMs remain unidentified. A genome-wide linkage study was performed to identify the locus responsible for an autosomal recessive inherited form of juvenile DCM (JDCM) in Portuguese water dogs using 16 families segregating the disease. Results link the JDCM locus to canine chromosome 8 with two-point and multipoint lod scores of 10.8 and 14, respectively. The locus maps to a 3.9-Mb region, with complete syntenic homology to human chromosome 14, that contains no genes or loci known to be involved in the development of any type of cardiomyopathy. This discovery of a DCM locus with a previously unknown etiology will provide a new gene to examine in human DCM patients and a model for testing therapeutic approaches for heart failure.

  13. Primary propulsion/large space system interaction study

    NASA Technical Reports Server (NTRS)

    Coyner, J. V.; Dergance, R. H.; Robertson, R. I.; Wiggins, J. V.

    1981-01-01

    An interaction study was conducted between propulsion systems and large space structures to determine the effect of low thrust primary propulsion system characteristics on the mass, area, and orbit transfer characteristics of large space systems (LSS). The LSS which were considered would be deployed from the space shuttle orbiter bay in low Earth orbit, then transferred to geosynchronous equatorial orbit by their own propulsion systems. The types of structures studied were the expandable box truss, hoop and column, and wrap radial rib, each with various surface mesh densities. The impact of the acceleration forces on system sizing was determined and the effects of single point, multipoint, and transient thrust applications were examined. Orbit transfer strategies were analyzed to determine the required velocity increment, burn time, trip time, and payload capability over a range of final acceleration levels. Variables considered were number of perigee burns, delivered specific impulse, and constant thrust and constant acceleration modes of propulsion. Propulsion stages were sized for four propellant combinations: oxygen/hydrogen, oxygen/methane, oxygen/kerosene, and nitrogen tetroxide/monomethylhydrazine, for pump fed and pressure fed engine systems. Two types of tankage configurations were evaluated: minimum length, to maximize available payload volume, and maximum performance, to maximize available payload mass.

  14. Empirical characteristics of family-based linkage to a complex trait: the ADIPOQ region and adiponectin levels.

    PubMed

    Hellwege, Jacklyn N; Palmer, Nicholette D; Mark Brown, W; Brown, Mark W; Ziegler, Julie T; Sandy An, S; An, Sandy S; Guo, Xiuqing; Ida Chen, Y-D; Chen, Ida Y-D; Taylor, Kent; Hawkins, Gregory A; Ng, Maggie C Y; Speliotes, Elizabeth K; Lorenzo, Carlos; Norris, Jill M; Rotter, Jerome I; Wagenknecht, Lynne E; Langefeld, Carl D; Bowden, Donald W

    2015-02-01

    We previously identified a low-frequency (1.1%) coding variant (G45R; rs200573126) in the adiponectin gene (ADIPOQ) which was the basis for a multipoint microsatellite linkage signal (LOD = 8.2) for plasma adiponectin levels in Hispanic families. We have empirically evaluated the ability of data from targeted common variants, exome chip genotyping, and genome-wide association study data to detect linkage and association to adiponectin protein levels at this locus. Simple two-point linkage and association analyses were performed in 88 Hispanic families (1,150 individuals) using 10,958 SNPs on chromosome 3. Approaches were compared for their ability to map the functional variant, G45R, which was strongly linked (two-point LOD = 20.98) and powerfully associated (p value = 8.1 × 10^-50). Over 450 SNPs within a broad 61 Mb interval around rs200573126 showed nominal evidence of linkage (LOD > 3) but only four other SNPs in this region were associated with p values < 1.0 × 10^-4. When G45R was accounted for, the maximum LOD score across the interval dropped to 4.39 and the best p value was 1.1 × 10^-5. Linked and/or associated variants ranged in frequency (0.0018-0.50) and type (coding, non-coding) and had little detectable linkage disequilibrium with rs200573126 (r^2 < 0.20). In addition, the two-point linkage approach empirically outperformed multipoint microsatellite and multipoint SNP analysis. In the absence of data for rs200573126, family-based linkage analysis using a moderately dense SNP dataset, including both common and low-frequency variants, resulted in stronger evidence for an adiponectin locus than association data alone. Thus, linkage analysis can be a useful tool to facilitate identification of high-impact genetic variants.

  15. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    PubMed Central

    2010-01-01

    Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistics and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predictive value.
The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
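All of the scans compared here are built on Kulldorff's Poisson log-likelihood ratio for a candidate zone; the penalized variants multiply or trade this statistic off against a shape penalty. A minimal sketch of the unpenalized statistic (variable names illustrative):

```python
import math

def kulldorff_llr(c, E, C):
    """Kulldorff Poisson log-likelihood ratio for a candidate cluster.
    c: observed cases inside the zone, E: expected inside, C: total cases."""
    if c <= E:
        return 0.0   # scan for excess-risk clusters only
    inside = c * math.log(c / E)
    outside = (C - c) * math.log((C - c) / (C - E)) if C > c else 0.0
    return inside + outside

# A zone with double the expected count scores highly; one at expectation scores 0
print(round(kulldorff_llr(40, 20, 200), 2), kulldorff_llr(20, 20, 200))  # → 8.88 0.0
```

Significance is then assessed by Monte Carlo replication of the case counts under the null, or, for the multi-objective scans above, via the attainment function.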

  16. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposed a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on the assumption that point clouds can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from point clouds can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation. EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points can be labelled as the component with the larger likelihood. Furthermore, intensity information was also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtained a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
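A 1-D toy version of the idea: model point heights as a two-component Gaussian mixture, run EM for the maximum-likelihood parameters, and label each point by the larger responsibility. Real point clouds (and the intensity refinement) are beyond this sketch; all data here are synthetic:

```python
import numpy as np

def em_two_gaussians(z, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (ground vs. object heights)."""
    mu = np.array([z.min(), z.max()])       # spread the initial means apart
    sigma = np.full(2, z.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (w / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((z[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood updates of weights, means, and spreads
        nk = resp.sum(axis=0)
        w = nk / len(z)
        mu = (resp * z[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma, resp

# Synthetic heights: ground near 0 m, vegetation/buildings near 8 m
rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0.0, 0.3, 700), rng.normal(8.0, 1.0, 300)])
w, mu, sigma, resp = em_two_gaussians(z)
labels = resp.argmax(axis=1)    # each point labelled by the larger likelihood
print(np.sort(mu).round(1))     # component means recover the two surfaces
```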

  17. A quantum framework for likelihood ratios

    NASA Astrophysics Data System (ADS)

    Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.

    The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.

  18. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
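Adaptive smoothing of this kind — kernel bandwidths that shrink where events are dense — can be sketched as follows. The epicentres, neighbour rank k, and evaluation points are invented for illustration; they are not the study's catalog or its calibrated smoothing parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical epicentres: one dense cluster plus diffuse background events
events = np.vstack([rng.normal([0.0, 0.0], 0.2, (80, 2)),
                    rng.uniform(-5.0, 5.0, (20, 2))])

def adaptive_rate(grid, events, k=5):
    """Smoothed rate: each event carries a Gaussian kernel whose bandwidth
    is the distance to its k-th nearest neighbouring event."""
    d = np.linalg.norm(events[:, None, :] - events[None, :, :], axis=-1)
    bw = np.sort(d, axis=1)[:, k]                  # adaptive bandwidth per event
    r2 = ((grid[:, None, :] - events[None, :, :]) ** 2).sum(-1)
    kern = np.exp(-r2 / (2 * bw ** 2)) / (2 * np.pi * bw ** 2)
    return kern.sum(axis=1)                        # events per unit area

grid = np.array([[0.0, 0.0], [4.0, 4.0]])          # cluster centre vs. periphery
rates = adaptive_rate(grid, events)
print("rate at cluster centre vs periphery:", rates.round(3))
```

The concentration effect the abstract reports — forecasts sharpened near clusters of high activity relative to fixed-bandwidth smoothing — comes directly from the bandwidths shrinking in dense regions.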

  19. Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli

    2018-01-01

    In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain, larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding in which domains the core nodes should reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks and present mixed integer linear programming (MILP) formulations that optimally construct MP2MP multicast trees. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is also proposed. Simulation results show that, under different traffic patterns, the proposed algorithm significantly improves network resource occupation and multicast tree setup latency compared with conventional algorithms designed for single-domain networks.

  20. Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps

    NASA Astrophysics Data System (ADS)

    Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.

    2013-06-01

    Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator treats intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests applying our pixel-based likelihood routine to the publicly available WMAP low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, and report the differences in cosmological parameters relative to those evaluated with the full WMAP likelihood public package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The credible central values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, the largest difference being a shift to smaller values of the scalar spectral index nS.

  1. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white measurement errors is considered; the system parameterization is assumed known. Concepts of local parameter identifiability and local constrained maximum likelihood (CML) parameter identifiability are established, and a set of sufficient conditions for the existence of a region of parameter identifiability is derived. A computational procedure employing interval arithmetic is provided for finding regions of parameter identifiability. If the vector of true parameters is locally CML identifiable, then with probability one it is the unique maximal point of the maximum likelihood function in the region of parameter identifiability, and the CML estimation sequence converges to the vector of true parameters.

  2. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    PubMed

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests coordinating the channel access process through contention-based carrier sensing. However, the directionality of light and the randomness of diffuse reflection give rise to a serious imperfect carrier sense (ICS) problem [e.g., the hidden node (HN) and exposed node (EN) problems], which poses great challenges to realizing an optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. To address the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes optical CSMA for the indoor VLC uplink. Owing to the limited coverage of the transmitted optical signal, in our OHCPP the ENs within a transmitter's carrier-sense region can be retained provided that they do not corrupt the ongoing communications. Moreover, because of the directionality of both light-emitting-diode (LED) transmitters and receivers, theoretical analysis of the HN problem is difficult. We derive closed-form expressions approximating the outage probability and transmission capacity of VLC networks in the presence of HNs and ENs. Simulation results validate the analysis and show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given LED emission angle.
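The OHCPP of the paper is a tailored construction, but the underlying hard-core idea can be illustrated with the classical Matérn type-II model often used for CSMA analysis: a candidate transmits only if it holds the smallest back-off mark among all candidates within its carrier-sense radius. The room size, node density, and sensing radius below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical candidate transmitters: Poisson scatter in a 10 m x 10 m room
n = rng.poisson(200)
pts = rng.uniform(0.0, 10.0, (n, 2))
marks = rng.random(n)          # random back-off marks
d_cs = 1.0                     # carrier-sense (hard-core) radius

# Matérn type-II thinning: keep a node iff its mark beats every neighbour's
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
nbr = (dist < d_cs) & (dist > 0)
keep = np.array([marks[i] < marks[nbr[i]].min() if nbr[i].any() else True
                 for i in range(n)])
active = pts[keep]
ad = np.linalg.norm(active[:, None] - active[None, :], axis=-1)
print(keep.sum(), "of", n, "candidates transmit; closest active pair:",
      round(float(ad[ad > 0].min()), 2), "m")
```

By construction no two simultaneously active transmitters are closer than the sensing radius — the hard-core property; the paper's OHCPP relaxes exactly this rule so that exposed nodes whose transmissions cannot corrupt ongoing links are retained.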

  3. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed civil tiltrotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses, and the structural properties are calculated using in-house algorithms for both isotropic and composite box beam sections. The study has four major objectives. (1) Aerodynamic optimization: the effects of blade aerodynamic characteristics on the cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structural optimization: a multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is that developed in objective 1, and the structural analysis is performed using an in-house code that models a composite box beam. The results are compared with both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: the multilevel optimization procedure of objective 2 is extended to a multipoint design problem in which performance in hover, cruise, and take-off is optimized simultaneously. (4) Coupled rotor/wing optimization: using the comprehensive rotary-wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance of high-speed tilt-rotor aircraft; the procedure contains design variables that define the rotor and wing planforms.

  4. 47 CFR 22.623 - System configuration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false System configuration. 22.623 Section 22.623... Paging and Radiotelephone Service Point-To-Multipoint Operation § 22.623 System configuration. This section requires a minimum configuration for point-to-multipoint systems using the channels listed in § 22...

  5. 47 CFR 22.623 - System configuration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false System configuration. 22.623 Section 22.623... Paging and Radiotelephone Service Point-To-Multipoint Operation § 22.623 System configuration. This section requires a minimum configuration for point-to-multipoint systems using the channels listed in § 22...

  6. 47 CFR 22.623 - System configuration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false System configuration. 22.623 Section 22.623... Paging and Radiotelephone Service Point-To-Multipoint Operation § 22.623 System configuration. This section requires a minimum configuration for point-to-multipoint systems using the channels listed in § 22...

  7. Public Data Set: Control and Automation of the Pegasus Multi-point Thomson Scattering System

    DOE Data Explorer

    Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)

    2016-08-12

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in G.M. Bodner et al., 'Control and Automation of the Pegasus Multi-point Thomson Scattering System,' Rev. Sci. Instrum. 87, 11E523 (2016).

  8. 47 CFR 22.623 - System configuration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false System configuration. 22.623 Section 22.623... Paging and Radiotelephone Service Point-To-Multipoint Operation § 22.623 System configuration. This section requires a minimum configuration for point-to-multipoint systems using the channels listed in § 22...

  9. Learning GIS and exploring geolocated data with the all-in-one Geolokit toolbox for Google Earth

    NASA Astrophysics Data System (ADS)

    Watlet, A.; Triantafyllou, A.; Bastin, C.

    2016-12-01

    GIS software is today's essential tool for gathering and visualizing geological data, applying spatial and temporal analysis and, finally, creating and sharing interactive maps for further investigation in the geosciences. These skills are especially important for students going through field trips, sample collection or field experiments. However, there is generally too little time to teach all aspects of visualizing geolocated geoscientific data in detail. For these purposes, we developed Geolokit: a lightweight freeware dedicated to geodata visualization and written in Python, a high-level, cross-platform programming language. Geolokit is accessible through a graphical user interface designed to run alongside Google Earth, benefitting from its numerous interactive capabilities. It is a very user-friendly toolbox that allows `geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), apply fast data analysis tools and visualize the results in the Google Earth environment using KML code, with no third-party software required except Google Earth itself. Geolokit comes with a large number of geoscience labels, symbols, colours and placemarks and can display several types of geolocated data, including:
    - multi-point datasets;
    - automatically computed contours of multi-point datasets via several interpolation methods;
    - discrete planar and linear structural geology data in 2D or 3D, supporting a large range of structure input formats;
    - clustered stereonets and rose diagrams;
    - 2D cross-sections as vertical sections;
    - georeferenced maps and grids with user-defined coordinates;
    - field pictures, using either geo-tracking metadata from a camera's built-in GPS module or the same-day track of an external GPS.
    In the end, Geolokit makes it possible to visualize and explore data quickly without getting lost in the numerous capabilities of GIS software suites. We are looking for students and teachers to discover all the functionalities of Geolokit. As the project is under development and planned to be open source, we welcome discussions regarding particular needs or ideas, as well as contributions to the Geolokit project.

  10. Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables

    NASA Astrophysics Data System (ADS)

    Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan

    2007-06-01

    Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing has proved an effective alternative to intensive and costly ground surveys for mapping grass communities, but the mapping accuracy of wetland grass communities remains unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong association with the distribution of grass communities; NDVI layers stacked from images of different months were used to generate the Carex community map, and the October image was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree and an artificial neural network separately. The results show that environmental variables improve the mapping accuracy, and that classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). Moreover, maximum likelihood (a=92.71%, k=0.90) and artificial neural network (a=94.79%, k=0.93) classifiers perform significantly better than the decision tree (a=86.46%, k=0.83).
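Maximum likelihood classification with per-class Gaussian densities amounts to assigning each pixel to the class whose fitted Gaussian gives the highest log-likelihood. A minimal sketch with made-up (NDVI, elevation) feature distributions for two of the communities — the class statistics are invented, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical training pixels (NDVI, elevation in m) for two communities
carex = rng.multivariate_normal([0.6, 12.0], [[0.01, 0.0], [0.0, 1.0]], 200)
cynodon = rng.multivariate_normal([0.4, 15.0], [[0.01, 0.0], [0.0, 1.0]], 200)

def gaussian_log_likelihood(x, train):
    """Log-likelihood of pixels x under a Gaussian fitted to training pixels
    (the constant -d/2*log(2*pi) is dropped; it is the same for every class)."""
    mu, cov = train.mean(axis=0), np.cov(train, rowvar=False)
    diff = x - mu
    return (-0.5 * (diff @ np.linalg.inv(cov) * diff).sum(-1)
            - 0.5 * np.log(np.linalg.det(cov)))

pixel = np.array([[0.58, 12.5]])                       # unknown pixel
scores = [gaussian_log_likelihood(pixel, t) for t in (carex, cynodon)]
label = int(np.argmax(scores))                         # 0 = Carex, 1 = Cynodon
print("classified as", ["Carex", "Cynodon"][label])
```

Stacking multi-temporal NDVI bands and elevation simply widens the feature vector; the decision rule itself is unchanged.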

  11. On computation of p-values in parametric linkage analysis.

    PubMed

    Kurbasic, Azra; Hössjer, Ola

    2004-01-01

    Parametric linkage analysis is usually used to find chromosomal regions linked to a disease (phenotype) that is described by a specific genetic model. This is done by investigating the relations between the disease and genetic markers, that is, well-characterized loci of known position with a clear Mendelian mode of inheritance. Assume we have found an interesting region on a chromosome that we suspect is linked to the disease. We then want to test the null hypothesis of no linkage against the alternative of linkage, using the maximal lod score Zmax as the test statistic. It is well known that, when only one point (one marker) on the chromosome is studied, Zmax asymptotically follows a (2 ln 10)⁻¹ × (½χ²₀ + ½χ²₁) distribution under the null hypothesis of no linkage. In this paper, we show, both by simulations and theoretical arguments, that the null distribution of Zmax has no simple form when more than one marker is used (multipoint analysis). In fact, the distribution of Zmax depends on the number of families, their structure, the assumed genetic model, marker denseness, and marker informativity. This means that a constant critical limit for Zmax leads to tests with different significance levels. Because of these problems, from the statistical point of view the maximal lod score should be supplemented by a p-value when results are reported. Copyright (c) 2004 S. Karger AG, Basel.
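The single-marker asymptotic null quoted above — a point mass of ½ at zero plus a χ²₁ variate rescaled by 1/(2 ln 10) — is straightforward to simulate, which also shows how a p-value for an observed maximal lod score can be read off. Sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# With probability 1/2 the maximal lod score is 0; otherwise it is a
# chi-square(1) variate divided by 2 ln 10.
chi2_1 = rng.standard_normal(n) ** 2
zmax = np.where(rng.random(n) < 0.5, 0.0, chi2_1 / (2 * np.log(10)))

p = (zmax >= 3.0).mean()   # simulated p-value for an observed lod score of 3
print("P(Zmax >= 3) ~", p)
```

Under this single-marker null, a lod score of 3 corresponds to a p-value of roughly 1e-4; the paper's point is that in multipoint analysis no such universal translation exists, because the null distribution depends on family structure, the genetic model, marker denseness and informativity.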

  12. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given, and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly: altitude maximization for a sounding rocket (the Goddard problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical plane, also in the presence of a dynamic pressure limit. In the second problem, singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control-constrained arcs is presented and applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi necessary conditions is made by giving a new proof of the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's necessary condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.
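In simple unconstrained cases, the boundary value problem arising from the first-order necessary conditions can be handed to a standard collocation solver. A minimal sketch — not the rocket or aircraft problems of the thesis — for an energy-optimal double-integrator transfer, assuming SciPy's solve_bvp is available:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Necessary conditions for min ∫ u²/2 dt with x1' = x2, x2' = u:
# Pontryagin gives u = -p2, with costates p1' = 0, p2' = -p1.
def odes(t, y):
    x1, x2, p1, p2 = y
    return np.vstack([x2, -p2, np.zeros_like(p1), -p1])

def bc(ya, yb):
    # Rest-to-rest transfer: x1(0)=0, x2(0)=0, x1(1)=1, x2(1)=0
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 20)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
x1_mid = float(sol.sol(0.5)[0])
print("x1(0.5) =", round(x1_mid, 4))   # analytic optimum x1 = 3t² - 2t³ gives 0.5
```

The switching-point and state-constrained problems treated in the thesis add interior point conditions and corner conditions on the costates, which is what turns this two-point problem into a genuinely multipoint one.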

  13. Dynamic Cerebral Autoregulation Is Acutely Impaired during Maximal Apnoea in Trained Divers

    PubMed Central

    Cross, Troy J.; Kavanagh, Justin J.; Breskovic, Toni; Johnson, Bruce D.; Dujic, Zeljko

    2014-01-01

    Aims To examine whether dynamic cerebral autoregulation is acutely impaired during maximal voluntary apnoea in trained divers. Methods Mean arterial pressure (MAP), cerebral blood flow velocity (CBFV) and end-tidal partial pressures of O2 and CO2 (PETO2 and PETCO2) were measured in eleven trained, male apnoea divers (28±2 yr; 182±2 cm, 76±7 kg) during maximal “dry” breath holding. Dynamic cerebral autoregulation was assessed by determining the strength of phase synchronisation between MAP and CBFV during maximal apnoea. Results The strength of phase synchronisation between MAP and CBFV increased from rest until the end of maximal voluntary apnoea (P<0.05), suggesting that dynamic cerebral autoregulation had weakened by the apnoea breakpoint. The magnitude of impairment in dynamic cerebral autoregulation was strongly and positively related to the rise in PETCO2 observed during maximal breath holding (R² = 0.67, P<0.05). Interestingly, the impairment in dynamic cerebral autoregulation was not related to the fall in PETO2 induced by apnoea (R² = 0.01, P = 0.75). Conclusions This study is the first to report that dynamic cerebral autoregulation is acutely impaired in trained divers performing maximal voluntary apnoea. Furthermore, our data suggest that the impaired autoregulatory response is related to the change in PETCO2, but not PETO2, during maximal apnoea in trained divers. PMID:24498340

  14. Order reduction of z-transfer functions via multipoint Jordan continued-fraction expansion

    NASA Technical Reports Server (NTRS)

    Lee, Ying-Chin; Hwang, Chyi; Shieh, Leang S.

    1992-01-01

    The order reduction problem of z-transfer functions is solved by using the multipoint Jordan continued-fraction expansion (MJCFE) technique. An efficient algorithm that does not require the use of complex algebra is presented for obtaining an MJCFE from a stable z-transfer function with expansion points selected from the unit circle and/or the positive real axis of the z-plane. The reduced-order models are exactly the multipoint Pade approximants of the original system and, therefore, they match the (weighted) time-moments of the impulse response and preserve the frequency responses of the system at some characteristic frequencies, such as gain crossover frequency, phase crossover frequency, bandwidth, etc.

  15. Rogue waves in terms of multi-point statistics and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, Ali; Lind, Pedro; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim

    2017-04-01

    Ocean waves that lead to rogue waves are investigated against the background of complex systems. In contrast to deterministic approaches based on the nonlinear Schrödinger equation or focusing effects, we analyze the system as a noisy stochastic one. In particular, we present a statistical method that maps the complexity of multi-point data into the statistics of hierarchically ordered height increments for different time scales. We show that the stochastic cascade process has Markov properties and is governed by a Fokker-Planck equation. Conditional probabilities, as well as the Fokker-Planck equation itself, can be estimated directly from the available observational data. This stochastic description allows us to exhibit several new aspects of wave states. Surrogate data sets can in turn be generated, allowing us to work out different statistical features of the complex sea state in general and of extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting individual rogue waves based on precursory dynamics. As a further outlook, ocean wave states are considered in terms of nonequilibrium thermodynamics, for which the entropy production of different wave heights is considered. We show evidence that rogue waves are characterized by negative entropy production, and that the statistics of the entropy production can be used to distinguish different wave states.
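The direct estimation the authors mention — reading the Fokker-Planck equation off the data — is usually done through conditional moments of the increments (Kramers-Moyal coefficients). A sketch on a synthetic Ornstein-Uhlenbeck series standing in for real wave-height increment data; the process parameters and binning are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in for the observational series: Ornstein-Uhlenbeck process
dt, n, gamma, sigma = 0.01, 200_000, 1.0, 1.0
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - gamma * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Kramers-Moyal estimates: conditional moments of the increments give the
# drift D1(x) and diffusion D2(x) entering the Fokker-Planck equation.
bins = np.linspace(-1.5, 1.5, 13)
centres = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
dx = np.diff(x)
D1 = np.array([dx[idx == b].mean() / dt for b in range(centres.size)])
D2 = np.array([(dx[idx == b] ** 2).mean() / (2 * dt) for b in range(centres.size)])
slope = np.polyfit(centres, D1, 1)[0]
print("drift slope:", round(slope, 2), "| mean diffusion:", round(float(D2.mean()), 2))
```

For this process the estimates should recover a linear drift of slope -γ and a constant diffusion σ²/2, illustrating that drift and diffusion — and hence the Fokker-Planck equation — can be read off the data without fitting a deterministic wave model.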

  16. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  17. Full potential methods for analysis/design of complex aerospace configurations

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Szema, Kuo-Yen; Bonner, Ellwood

    1986-01-01

    The steady form of the full potential equation, in conservative form, is employed to analyze and design a wide variety of complex aerodynamic shapes. The nonlinear method is based on the theory of characteristic signal propagation coupled with novel flux biasing concepts and body-fitted mapping procedures. The resulting codes are vectorized for the CRAY XMP and the VPS-32 supercomputers. Use of the full potential nonlinear theory is demonstrated for a single-point supersonic wing design and a multipoint design for transonic maneuver/supersonic cruise/maneuver conditions. Achievement of high aerodynamic efficiency through numerical design is verified by wind tunnel tests. Other studies reported include analyses of a canard/wing/nacelle fighter geometry.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellsten, E.; Vesa, J.; Peltonen, L.

    Infantile neuronal ceroid lipofuscinosis (INCL, CLN1) is an autosomally inherited progressive neurodegenerative disorder. The disease results in the massive death of cortical neurons, suggesting an essential role for the CLN1 gene product in normal neuronal maturation during the first years of life. Identification of new multiallelic markers has now made possible the construction of a refined genetic map encompassing the CLN1 locus at 1p32. Strong allelic association was detected with a new, highly polymorphic HY-TM1 marker. The authors incorporated this observed linkage disequilibrium into multipoint linkage analysis, which significantly increased the informativeness of the limited family material and facilitated refined assignment of the CLN1 locus. 23 refs., 2 figs., 4 tabs.

  19. Multi-Point E-Conferencing with Initial Teacher Training Students in England: Pitfalls and Potential

    ERIC Educational Resources Information Center

    Pratt, Nick

    2008-01-01

    This article reports on attempts to initiate multi-point e-conferencing between English teacher education students on school placements, their host teachers and their university tutors. A sociocultural perspective is adopted in analysing the project, using the metaphor of a "professional knowledge landscape" [Clandinin, D. J., &…

  20. Public Data Set: A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment

    DOE Data Explorer

    Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586)

    2016-09-16

    This public data set contains openly-documented, machine readable digital research data corresponding to figures published in D.J. Schlossberg et. al., 'A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment,' Rev. Sci. Instrum. 87, 11E403 (2016).

  1. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  2. Improved method to fully compensate the spatial phase nonuniformity of LCoS devices with a Fizeau interferometer.

    PubMed

    Lu, Qiang; Sheng, Lei; Zeng, Fei; Gao, Shijie; Qiao, Yanfeng

    2016-10-01

    Liquid crystal on silicon (LCoS) devices usually show spatial phase nonuniformity (SPNU) in phase-modulation applications, comprising phase retardance nonuniformity (PRNU) as a function of the applied voltage and inherent wavefront distortion (WFD) introduced by the device itself. We propose a multipoint calibration method utilizing a Fizeau interferometer to compensate the SPNU of the device. Calibration of the PRNU is realized by defining a grid of 3×6 cells over the aperture and calculating the phase retardance of each cell versus a gradient gray pattern. By designing an adjusted gray pattern computed from the calibrated multipoint phase retardance function, the inherent WFD is compensated. The peak-to-valley (PV) value of the residual WFD after the multipoint calibration method is significantly reduced from 2.5λ to 0.140λ, whereas global calibration reduces it only to 0.364λ. Experimental results for finite-energy 2D Airy beams generated in Fourier space demonstrate the effectiveness of this multipoint calibration method.

  3. 2pBAb5. Validation of three-dimensional strain tracking by volumetric ultrasound image correlation in a pubovisceral muscle model

    PubMed Central

    Nagle, Anna S.; Nageswaren, Ashok R.; Haridas, Balakrishna; Mast, T. D.

    2014-01-01

    Little is understood about the biomechanical changes leading to pelvic floor disorders such as stress urinary incontinence. In order to measure regional biomechanical properties of the pelvic floor muscles in vivo, a three-dimensional (3D) strain tracking technique employing correlation of volumetric ultrasound images has been implemented. In this technique, local 3D displacements are determined as a function of applied stress and then converted to strain maps. To validate this approach, an in vitro model of the pubovisceral muscle, with a hemispherical indenter emulating the downward stress caused by intra-abdominal pressure, was constructed. Volumetric B-scan images were recorded as a function of indenter displacement while muscle strain was measured independently by a sonomicrometry system (Sonometrics). Local strains were computed by ultrasound image correlation and compared with sonomicrometry-measured strains to assess strain tracking accuracy. Image correlation by maximizing an exponential likelihood function was found to be more reliable than the Pearson correlation coefficient. Strain accuracy depended on the sizes of the subvolumes used for image correlation relative to characteristic speckle length scales of the images. Decorrelation of echo signals was mapped as a function of indenter displacement and local tissue orientation. Strain measurement accuracy was weakly related to local echo decorrelation. PMID:24900165

  4. A High Density Consensus Map of Rye (Secale cereale L.) Based on DArT Markers

    PubMed Central

    Myśków, Beata; Stojałowski, Stefan; Heller-Uszyńska, Katarzyna; Góralska, Magdalena; Brągoszewski, Piotr; Uszyński, Grzegorz; Kilian, Andrzej; Rakoczy-Trojanowska, Monika

    2011-01-01

    Background Rye (Secale cereale L.) is an economically important crop, exhibiting unique features such as outstanding resistance to biotic and abiotic stresses and high nutrient use efficiency. This species presents a challenge to geneticists and breeders due to its large genome containing a high proportion of repetitive sequences, self-incompatibility, severe inbreeding depression and tissue culture recalcitrance. The genomic resources currently available for rye are underdeveloped in comparison with other crops of similar economic importance. The aim of this study was to create a highly saturated, multilocus linkage map of rye via consensus mapping, based on Diversity Arrays Technology (DArT) markers. Methodology/Principal Findings Recombinant inbred lines (RILs) from 5 populations (564 in total) were genotyped using DArT markers and subjected to linkage analysis using Join Map 4.0 and Multipoint Consensus 2.2 software. A consensus map was constructed using a total of 9703 segregating markers. The average chromosome map length ranged from 199.9 cM (2R) to 251.4 cM (4R) and the average map density was 1.1 cM. The integrated map comprised 4048 loci with the number of markers per chromosome ranging from 454 for 7R to 805 for 4R. In comparison with previously published studies on rye, this represents an eight-fold increase in the number of loci placed on a consensus map and a more than two-fold increase in the number of genetically mapped DArT markers. Conclusions/Significance Through the careful choice of marker type, mapping populations and the use of software packages implementing powerful algorithms for map order optimization, we produced a valuable resource for rye and triticale genomics and breeding, which provides an excellent starting point for more in-depth studies on rye genome organization. PMID:22163026

  5. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
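    A minimal sketch of the empirical likelihood ratio idea, assuming a single continuous predictor: estimate non-parametric class-conditional frequency distributions for landslide and stable cells, then score each cell by their ratio. Under the conditional independence assumption, ratios from additional predictors (aspect, lithology, etc.) would simply be multiplied in. All data below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: one continuous predictor (slope angle, degrees)
# per cell, with labels marking known landslide cells. Values illustrative.
slope = np.concatenate([rng.normal(25, 5, 300),   # landslide cells: steeper
                        rng.normal(12, 5, 700)])  # stable cells: gentler
label = np.concatenate([np.ones(300, bool), np.zeros(700, bool)])

bins = np.linspace(slope.min(), slope.max(), 16)

def empirical_density(x, mask):
    """Non-parametric class-conditional frequency per bin (with smoothing)."""
    h, _ = np.histogram(x[mask], bins=bins)
    return (h + 1) / (h.sum() + len(h))     # Laplace smoothing avoids /0

f1 = empirical_density(slope, label)        # f(slope | landslide)
f0 = empirical_density(slope, ~label)       # f(slope | stable)

# Likelihood ratio for each cell; under conditional independence, ratios
# from several predictors would be multiplied together cell by cell.
idx = np.clip(np.digitize(slope, bins) - 1, 0, len(f1) - 1)
lr = f1[idx] / f0[idx]
```

    Mapping `lr` over the grid and ranking cells gives the relative-hazard map; the logistic discriminant alternative would instead fit the log-ratio as a linear function of the predictors.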

  6. Linear maps preserving maximal deviation and the Jordan structure of quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamhalter, Jan

    2012-12-15

    In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnar.

  7. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
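    The E/M iteration the abstract relies on can be shown with a stripped-down example: EM for a two-component 1-D Gaussian mixture, where the hidden variable is the cluster assignment. This is only the core machinery, not the paper's joint feature-selection model, in which a binary feature-saliency variable is folded into the same expected-likelihood maximization.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# Minimal EM for a two-component 1-D Gaussian mixture.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters to maximize the expected log-likelihood.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)
```

    In the paper's setting the responsibilities additionally weight each feature by its estimated saliency, so irrelevant alert attributes stop influencing the cluster assignments.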

  8. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum-likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
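    A sketch of the cluster likelihood built from the correlation matrix alone, in the form given by Giada and Marsili: clusters whose total internal correlation exceeds their size contribute positively, while singletons contribute zero. This is a minimal reading of the paper's expression and should be checked against it before serious use.

```python
import numpy as np

def gm_log_likelihood(C, labels):
    """Cluster log-likelihood computed from the Pearson correlation matrix C
    alone (Giada-Marsili form, as a sketch). Singleton clusters, and clusters
    with no excess internal correlation, contribute zero."""
    L = 0.0
    for s in np.unique(labels):
        idx = np.where(labels == s)[0]
        n = len(idx)
        c = C[np.ix_(idx, idx)].sum()   # internal correlation, diagonal included
        if n > 1 and n < c < n * n:
            L += 0.5 * (np.log(n / c)
                        + (n - 1) * np.log((n * n - n) / (n * n - c)))
    return L

# Merging two strongly correlated series beats leaving them as singletons.
C = np.array([[1.0, 0.9], [0.9, 1.0]])
merged = gm_log_likelihood(C, np.array([0, 0]))
split = gm_log_likelihood(C, np.array([0, 1]))
```

    Any maximization strategy (greedy merging, simulated annealing, etc.) can then search over label assignments for configurations that increase this score, which is why the method needs no tunable parameters or preset cluster count.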

  9. Use of three-point taper systems in timber cruising

    Treesearch

    James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes

    2000-01-01

    Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...

  10. Instructor Interaction and Immediacy Behaviors in a Multipoint Videoconferenced Instructional Environment: A Descriptive Case Study

    ERIC Educational Resources Information Center

    Bohnstedt, Kathy D.

    2011-01-01

    The purpose of this study was to examine the experiences of professors teaching in a multi-point videoconferencing instructional environment and how they interacted with students in proximate and remote classrooms. Qualitative and quantitative data were analyzed to gain an understanding of the teaching experience and to examine differences between…

  11. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    NASA Technical Reports Server (NTRS)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  12. Single-Pulse Multi-Point Multi-Component Interferometric Rayleigh Scattering Velocimeter

    NASA Technical Reports Server (NTRS)

    Bivolaru, Daniel; Danehy, Paul M.; Lee, Joseph W.; Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2006-01-01

    A simultaneous multi-point, multi-component velocimeter using interferometric detection of the Doppler shift of Rayleigh, Mie, and Rayleigh-Brillouin scattered light in supersonic flow is described. The system uses up to three sets of collection optics and one beam combiner for the reference laser light to form a single collimated beam. The planar Fabry-Perot interferometer used in the imaging mode for frequency detection preserves the spatial distribution of the signal reasonably well. Single-pulse multi-point measurements of up to two orthogonal and one non-orthogonal components of velocity in a Mach 2 free jet were performed to demonstrate the technique. The average velocity measurements show close agreement with CFD calculations using the VULCAN code.

  13. ISINGLASS campaign multi point sensors and data integration

    NASA Astrophysics Data System (ADS)

    Clayton, R.; Lynch, K. A.; Michell, R.; Hampton, D. L.; Samara, M.; Zettergren, M. D.; Hysell, D. L.; Lessard, M.

    2016-12-01

    The upcoming ISINGLASS mission will take place during February 2017 and will consist of 2 rockets launched out of Poker Flat Research Range, Alaska. Each rocket will deploy sensorcraft on the upleg to generate a localized multipoint measurement of the ionospheric plasma environment. Ground-based measurements such as the PFISR and SuperDARN radar arrays, CCD cameras making maps of multi-wavelength energy flux and characteristic energy, and Scanning Doppler Imagers for neutral flows, will also be used in conjunction with the in situ rocket measurements. The GEMINI ionospheric model will be used to stitch together all of the various data products during the mission to provide a map of the relevant parameters during the duration of the campaign. The sensors built by Dartmouth for this mission are called Petite Ion Probes (PIPs), collimated RPAs with heritage on the MICA auroral mission. For the upcoming ISINGLASS flights, PIPs will be assembled into small ejectables, and four of these sensorcraft will be deployed from each of the two rockets on the upleg, creating a localized swarm for the duration of the flight through the F-region ionosphere. During the science portion of the flight, the sensorcraft will be spaced 1 km apart from the main payload, which allows for the multipoint measurement of small-scale gradients in the F-region, such as across the edges of arcs. Interpretation of the data from the PIPs is aided by calibration done at Dartmouth in the Elephant plasma chamber. Comparison between the PIPs and Langmuir and emissive probe measurements provides verification of the PIP measurements, as well as verifying the field of view of the detector in the various configurations present on the payload. Observational goals for the campaign target a different type of auroral arc with each of the two rockets. The measured response of the thermal ionospheric plasma to different types and scale sizes of auroral precipitation drivers will provide two case studies quantifying the gradient scale lengths of auroral disturbances.

  14. Genome-Wide Linkage and Regional Association Study of Blood Pressure Response to the Cold Pressor Test in Han Chinese: The GenSalt Study

    PubMed Central

    Yang, Xueli; Gu, Dongfeng; He, Jiang; Hixson, James E.; Rao, Dabeeru C.; Lu, Fanghong; Mu, Jianjun; Jaquish, Cashell E.; Chen, Jing; Huang, Jianfeng; Shimmin, Lawrence C.; Rice, Treva K.; Chen, Jichun; Wu, Xigui; Liu, Depei; Kelly, Tanika N.

    2014-01-01

    Background Blood pressure (BP) response to cold pressor test (CPT) is associated with increased risk of cardiovascular disease. We performed a genome-wide linkage scan and regional association analysis to identify genetic determinants of BP response to CPT. Methods and Results A total of 1,961 Chinese participants completed the CPT. Multipoint quantitative trait linkage analysis was performed, followed by single-marker and gene-based analyses of variants in promising linkage regions (logarithm of odds, LOD ≥ 2). A suggestive linkage signal was identified for systolic BP (SBP) response to CPT at 20p13-20p12.3, with a maximum multipoint LOD score of 2.37. Based on regional association analysis with 1,351 SNPs in the linkage region, we found that marker rs2326373 at 20p13 was significantly associated with mean arterial pressure (MAP) responses to CPT (P = 8.8×10−6) after FDR adjustment for multiple comparisons. A similar trend was also observed for SBP response (P = 0.03) and DBP response (P = 4.6×10−5). Results of gene-based analyses showed that variants in genes MCM8 and SLC23A2 were associated with SBP response to CPT (P = 4.0×10−5 and 2.7×10−4, respectively), and variants in genes MCM8 and STK35 were associated with MAP response to CPT (P = 1.5×10−5 and 5.0×10−5, respectively). Conclusions Within a suggestive linkage region on chromosome 20, we identified a novel variant associated with BP responses to CPT. We also found gene-based associations of MCM8, SLC23A2 and STK35 in this region. Further work is warranted to confirm these findings. Clinical Trial Registration URL: http://www.clinicaltrials.gov; Unique identifier: NCT00721721. PMID:25028485

  15. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
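    The objective the A* search optimizes can be shown via the exhaustive-search baseline the article compares against: for BPSK over an AWGN channel, maximum-likelihood soft-decision decoding reduces to picking the codeword whose bipolar image has the largest inner product with the received soft values. The sketch below enumerates all 16 codewords of a Hamming (7,4) code; A* reaches the same answer while expanding only the most promising partial codewords.

```python
import numpy as np
from itertools import product

# Hamming (7,4) generator matrix (systematic form, minimum distance 3).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 2^4 codewords, generated from the 4-bit messages.
codebook = np.array([np.dot(m, G) % 2 for m in product([0, 1], repeat=4)])

def ml_decode(soft):
    """Exhaustive ML soft-decision decoding: maximize the correlation between
    the received soft values and the bipolar (+1/-1) image of each codeword."""
    bipolar = 1 - 2 * codebook          # bit 0 -> +1, bit 1 -> -1
    return codebook[np.argmax(bipolar @ soft)]

# Transmit a codeword with mild channel noise; ML decoding recovers it.
rng = np.random.default_rng(4)
sent = codebook[11]
received = (1 - 2 * sent) + 0.3 * rng.standard_normal(7)
decoded = ml_decode(received)
```

    Exhaustive search costs 2^k correlations (here 16, but 2^32 or more for the length-64 codes mentioned above), which is exactly the complexity the A* tree search avoids by using the most reliable symbols first.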

  16. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm obtains maximum likelihood estimates (MLEs) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters (a 5-subpopulation mixed-Weibull distribution). Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
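    A hedged sketch of the nonpostmortem case: EM for a two-component mixed-Weibull distribution, with the component membership of each failure time as the hidden variable. The M-step here solves the weighted Weibull likelihood equations by a damped fixed-point iteration for the shape parameter, a common textbook device rather than necessarily the paper's exact update; consistent with the abstract, the tracked log-likelihood should not decrease across EM sweeps.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic time-to-failure data from two Weibull subpopulations (illustrative).
t = np.concatenate([rng.weibull(1.5, 300) * 10.0, rng.weibull(4.0, 300) * 60.0])

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

def weighted_weibull_mle(t, w, k=1.0, iters=60):
    """Approximate M-step: weighted Weibull MLE via a damped fixed-point
    iteration for the shape k, then the closed form for the scale lam."""
    lt = np.log(t)
    for _ in range(iters):
        tk = t ** k
        k_new = 1.0 / ((w * tk * lt).sum() / (w * tk).sum()
                       - (w * lt).sum() / w.sum())
        k = 0.5 * (k + k_new)           # damping stabilizes the iteration
    lam = ((w * t ** k).sum() / w.sum()) ** (1.0 / k)
    return k, lam

# EM loop: mixture weights pi, per-component (shape, scale) parameters.
params = [(1.0, float(np.quantile(t, 0.3))), (2.0, float(np.quantile(t, 0.7)))]
pi = np.array([0.5, 0.5])
loglik = []
for _ in range(30):
    dens = np.column_stack([p * weibull_pdf(t, k, lam)
                            for p, (k, lam) in zip(pi, params)])
    loglik.append(np.log(dens.sum(axis=1)).sum())        # track convergence
    resp = dens / dens.sum(axis=1, keepdims=True)        # E-step
    pi = resp.mean(axis=0)                               # M-step: weights
    params = [weighted_weibull_mle(t, resp[:, j], k=params[j][0])
              for j in range(2)]
```

    As the abstract warns, such likelihood surfaces can have multiple local maxima, so in practice this loop should be restarted from several initial parameter sets and the best final log-likelihood kept.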

  17. A Novel Locus For Dilated Cardiomyopathy Maps to Canine Chromosome 8

    PubMed Central

    Werner, Petra; Raducha, Michael G.; Prociuk, Ulana; Sleeper, Meg M.; Henthorn, Paula S.

    2008-01-01

    Dilated cardiomyopathy (DCM), the most common form of cardiomyopathy, often leads to heart failure and sudden death. While a substantial proportion of DCMs are inherited, mutations responsible for the majority of DCMs remain unidentified. A genome-wide linkage study was performed to identify the locus responsible for an autosomal recessive inherited form of juvenile DCM (JDCM) in Portuguese water dogs using 16 families segregating the disease. Results link the JDCM locus to canine chromosome 8 with two-point and multipoint LOD scores of 10.8 and 14, respectively. The locus maps to a 3.9 Mb region, with complete syntenic homology to human chromosome 14, that contains no genes or loci known to be involved in the development of any type of cardiomyopathy. This discovery of a DCM locus with a previously unknown etiology will provide a new gene to examine in human DCM patients and a model for testing therapeutic approaches for heart failure. PMID:18442891

  18. A candidate region for Nevoid Basal Cell Carcinoma Syndrome defined by genetic and physical mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, B.; Negus, K.; Berkman, J.

    1994-09-01

    Nevoid Basal Cell Carcinoma Syndrome (NBCCS, or Gorlin's syndrome) is a cancer predisposition syndrome characterized by multiple basal cell carcinomas (BCCs) and diverse developmental defects. The gene responsible for NBCCS, which is most likely to be a tumor suppressor gene, has previously been mapped to 9q22.3-q31 in a 12 cM interval between the microsatellite marker loci D9S12 and D9S109. Combined multipoint and haplotype analyses of Australian pedigrees have further refined the localization to a 2 cM interval between markers D9S196 and D9S180. Our loss of heterozygosity (LOH) studies from sporadic (n=58) and familial (n=41) BCCs indicate that 50% have deletions within the NBCCS candidate region. All LOH is consistent with the genetic mapping of the NBCCS locus. Additionally, one sporadic tumor indicates that the smallest region of overlap in the deletions is within the interval D9S287 (proximal) and D9S180 (distal). A series of YAC clones from within this region has been mapped by FISH to examine chimerism. These clones, which have been mapped with respect to one another, form a contig which encompasses the candidate region from D9S196 to D9S180.

  19. The gene for spinal cerebellar ataxia 3 (SCA3) is located in a region of approximately 3 cM on chromosome 14q24.3-q32.2.

    PubMed Central

    Stevanin, G; Cancel, G; Dürr, A; Chneiweiss, H; Dubourg, O; Weissenbach, J; Cann, H M; Agid, Y; Brice, A

    1995-01-01

    SCA3, the gene for spinal cerebellar ataxia 3, was recently mapped to a 15-cM interval between D14S67 and D14S81 on chromosome 14q, by linkage analysis in two families of French ancestry. The SCA3 candidate region has now been refined by linkage analysis with four new microsatellite markers (D14S256, D14S291, D14S280, and AFM343vf1) in the same two families, in which 19 additional individuals were genotyped, and in a third French family. Combined two-point linkage analyses show that the new markers, D14S280 and AFM343vf1, are tightly linked to the SCA3 locus, with maximal lod scores, at recombination fraction θ = 0.00, of 7.05 and 13.70, respectively. Combined multipoint and recombinant haplotype analyses localize the SCA3 locus to a 3-cM interval flanked by D14S291 and D14S81. The same allele for D14S280 segregates with the disease locus in the three kindreds. This allele is frequent in the French population, however, and linkage disequilibrium is not clearly established. The SCA3 locus remains within the 29-cM region on 14q24.3-q32.2 containing the gene for Machado-Joseph disease, which is clinically related to the phenotype determined by SCA3, but it cannot yet be concluded that both diseases result from alterations of the same gene. PMID:7825578
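    The two-point lod scores quoted here have a compact closed form: for n informative meioses with k observed recombinants, the lod at recombination fraction θ is the log10 likelihood ratio against free recombination (θ = 0.5). A minimal sketch, with illustrative counts rather than this study's pedigree data:

```python
import numpy as np

def lod(k, n, theta):
    """Two-point lod score: log10 likelihood ratio of recombination fraction
    theta versus free recombination (theta = 0.5), for k observed
    recombinants among n informative meioses."""
    l = (k * np.log10(theta) if k > 0 else 0.0) \
        + ((n - k) * np.log10(1.0 - theta) if n > k else 0.0)
    return l - n * np.log10(0.5)

# No recombinants in 10 meioses gives lod = 10*log10(2) ~ 3.01 at theta = 0,
# the classical threshold for declaring significant linkage.
score_at_zero = lod(0, 10, 0.0)

# The maximal lod over a grid of theta sits at the MLE theta = k/n.
thetas = np.linspace(0.001, 0.5, 500)
best_theta = thetas[int(np.argmax([lod(1, 10, th) for th in thetas]))]
```

    Multipoint analyses generalize this by computing the pedigree likelihood with the disease locus moved along a fixed map of several markers at once.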

  20. Evidence of linkage disequilibrium between schizophrenia and the SCA1 CAG repeat on chromosome 6p23

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S.; Sun, Cui-E; Diehl, S.R.

    Schizophrenia and the closely related phenotype schizoaffective disorder are severe mental illnesses that affect >1.0% of the population. The major role that genetic factors contribute to disease susceptibility is very well established. Schizophrenia appears to be a highly complex and heterogeneous disorder, however, and gene-mapping efforts also face challenges in assigning diagnoses and in delineating the disease's phenotypic boundary. Several recently reported studies indicate that a schizophrenia-susceptibility gene may reside on the distal short arm of chromosome 6. Wang et al. reported a strong suggestion of linkage to chromosome 6pter-p22, using a resource based on 186 Irish families that was developed by K. S. Kendler, D. Walsh and colleagues, and one of us (S.R.D.), as described elsewhere. In that study, locus D6S260 gave the highest pairwise LOD score, 3.5, allowing for locus heterogeneity. Analysis with D6S260 and the distal locus F13A1 yielded a multipoint LOD score of 3.9, which was maximized by allowing for locus heterogeneity and assuming that 50% of families are linked to this region. Nonparametric affected-pedigree-member analyses also supported this finding. Several other groups have recently reported additional evidence suggesting linkage to this region, both in an expanded collection of Irish families and in families from a variety of other geographic locations. However, none of these studies succeed in narrowing the location of the putative disease gene beyond the approximately 30-cM region first identified by Wang et al. 44 refs., 1 fig., 1 tab.

  1. Local network interconnection through a satellite point-to-multipoint link. Ph.D. Thesis - Ecole Nationale Superieure des Telecommunications, 6 Jul. 1985

    NASA Technical Reports Server (NTRS)

    Duarte, O. Muniz Bandeira

    1986-01-01

    Four architectures to implement a point to multipoint satellite link protocol for communication services offered by the Telecom 1 satellite network are presented. A safe communication service with error correction and flow control facilities is described. It is shown that a time transparent communication system combines simplicity and cost advantages.

  2. Multipoint propagators in cosmological gravitational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardeau, Francis; Crocce, Martin; Scoccimarro, Roman

    2008-11-15

    We introduce the concept of multipoint propagators between linear cosmic fields and their nonlinear counterparts in the context of cosmological perturbation theory. Such functions express how a nonlinearly evolved Fourier mode depends on the full ensemble of modes in the initial density field. We identify and resum the dominant diagrams in the large-k limit, showing explicitly that multipoint propagators decay into the nonlinear regime at the same rate as the two-point propagator. These analytic results generalize the large-k limit behavior of the two-point propagator to arbitrary order. We measure the three-point propagator as a function of triangle shape in numerical simulations and confirm the results of our high-k resummation. We show that any n-point spectrum can be reconstructed from multipoint propagators, which leads to a physical connection between nonlinear corrections to the power spectrum at small scales and higher-order correlations at large scales. As a first application of these results, we calculate the reduced bispectrum at one loop in renormalized perturbation theory and show that we can predict the decrease in its dependence on triangle shape at redshift zero, when standard perturbation theory is least successful.

  3. Use of near-infrared spectroscopy and multipoint measurements for quality control of pharmaceutical drug products.

    PubMed

    Boiret, Mathieu; Chauchard, Fabien

    2017-01-01

    Near-infrared (NIR) spectroscopy is a non-destructive analytical technique that enables better understanding and optimization of pharmaceutical processes and final drug products. Its use in line is often limited by acquisition speed and sampling area. This work focuses on performing multipoint measurements at high acquisition speed at the end of the manufacturing process, on a conveyor belt system, to control both the distribution and the content of active pharmaceutical ingredient within final drug products, i.e., tablets. A specially designed probe with several collection fibers was developed for this study. By measuring spectral and spatial information, it provides physical and chemical knowledge of the final drug product. The NIR probe was installed on a conveyor belt system that enables the analysis of a large number of tablets. The use of these NIR multipoint measurement probes on a conveyor belt system provides an innovative method that has the potential to serve as a new paradigm for ensuring drug product quality at the end of the manufacturing process and as a new analytical method for a real-time release control strategy.

  4. Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.

    2008-03-01

    Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
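    The Maximum Likelihood classifier used here can be sketched as fitting one multivariate Gaussian per class to its training pixels and assigning each pixel to the class with the highest log-density. The two "classes" below are synthetic stand-ins for training sites, not the Landsat data; class names and band values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-band training samples for two hypothetical land-cover classes.
water = rng.multivariate_normal([20, 30], [[4, 1], [1, 3]], 200)
forest = rng.multivariate_normal([60, 80], [[5, 2], [2, 6]], 200)

def gaussian_ml_classifier(classes):
    """Maximum-likelihood classification: fit a multivariate Gaussian to each
    class's training pixels, then assign pixels to the class whose Gaussian
    log-density is highest."""
    stats = []
    for X in classes:
        mu = X.mean(axis=0)
        cov = np.cov(X.T)
        stats.append((mu, np.linalg.inv(cov), np.log(np.linalg.det(cov))))
    def classify(pixels):
        scores = np.stack([
            -0.5 * (logdet + np.einsum("ni,ij,nj->n",
                                       pixels - mu, icov, pixels - mu))
            for mu, icov, logdet in stats])
        return np.argmax(scores, axis=0)
    return classify

classify = gaussian_ml_classifier([water, forest])
labels = classify(np.array([[22.0, 31.0], [58.0, 79.0]]))
```

    The Parallelepiped-with-ML-tiebreaker variant mentioned in the abstract first applies cheap per-band min/max boxes and falls back to this Gaussian rule only where the boxes overlap.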

  5. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  6. Multipoint Ignition of a Gas Mixture by a Microwave Subcritical Discharge with an Extended Streamer Structure

    NASA Astrophysics Data System (ADS)

    Aleksandrov, K. V.; Busleev, N. I.; Grachev, L. P.; Esakov, I. I.; Ravaev, A. A.

    2018-02-01

    The results of experimental studies on using an electrical discharge with an extended streamer structure in a quasioptical microwave beam in the multipoint ignition of a propane-air mixture have been reported. The pulsed microwave discharge was initiated at the interior surface of a quartz tube that was filled with the mentioned flammable mixture and introduced into a microwave beam with a subbreakdown initial field. Gas breakdown was initiated by an electromagnetic vibrator. The dependence of the type of discharge on the microwave field strength was examined, the lower concentration threshold of ignition of the propane-air mixture by the studied discharge was determined, and the dynamics of combustion of the flammable mixture with local and multipoint ignition were compared.

  7. Autosomal recessive spastic paraplegia (SPG30) with mild ataxia and sensory neuropathy maps to chromosome 2q37.3.

    PubMed

    Klebe, Stephan; Azzedine, Hamid; Durr, Alexandra; Bastien, Patrick; Bouslam, Naima; Elleuch, Nizar; Forlani, Sylvie; Charon, Celine; Koenig, Michel; Melki, Judith; Brice, Alexis; Stevanin, Giovanni

    2006-06-01

    The hereditary spastic paraplegias (HSPs) are a clinically and genetically heterogeneous group of neurodegenerative diseases characterized by progressive spasticity in the lower limbs. Twenty-nine different loci (SPG) have been mapped so far, and 11 responsible genes have been identified. Clinically, one distinguishes between pure and complex HSP forms which are variably associated with numerous combinations of neurological and extra-neurological signs. Less is known about autosomal recessive forms (ARHSP) since the mapped loci have been identified often in single families and account for only a small percentage of patients. We report a new ARHSP locus (SPG30) on chromosome 2q37.3 in a consanguineous family with seven unaffected and four affected members of Algerian origin living in Eastern France with a significant multipoint lod score of 3.8. Ten other families from France (n = 4), Tunisia (n = 2), Algeria (n = 3) and the Czech Republic (n = 1) were not linked to the newly identified locus thus demonstrating further genetic heterogeneity. The phenotype of the linked family consists of spastic paraparesis and peripheral neuropathy associated with slight cerebellar signs confirmed by cerebellar atrophy on one CT scan.

  8. Spectral analysis of 87-lead body surface signal-averaged ECGs in patients with previous anterior myocardial infarction as a marker of ventricular tachycardia.

    PubMed

    Hosoya, Y; Kubota, I; Shibata, T; Yamaki, M; Ikeda, K; Tomoike, H

    1992-06-01

    Few studies have examined the relation between the body surface distribution of high- and low-frequency components within the QRS complex and ventricular tachycardia (VT). Eighty-seven-lead signal-averaged ECGs were obtained from 30 normal subjects (N group) and 30 patients with previous anterior myocardial infarction (MI) with VT (MI-VT[+] group, n = 10) or without VT (MI-VT[-] group, n = 20). The onset and offset of the QRS complex were determined from 87-lead root mean square values computed from the averaged (but not filtered) ECG waveforms. Fast Fourier transform analysis was performed on the signal-averaged ECGs. The resulting Fourier coefficients were attenuated by use of the transfer function, and the inverse transform was then computed for five frequency ranges (0-25, 25-40, 40-80, 80-150, and 150-250 Hz). From the QRS onset to the QRS offset, the time integral of the absolute value of the reconstructed waveforms was calculated for each of the five frequency ranges. The body surface distributions of these areas were expressed as QRS area maps. The maximal values of the QRS area maps were compared among the three groups. In the frequency ranges of 0-25 and 150-250 Hz, there were no significant differences in the maximal values among the three groups. Both MI groups had significantly smaller maximal values of QRS area maps in the frequency ranges of 25-40 and 40-80 Hz compared with the N group. The MI-VT(+) group had significantly smaller maximal values in the frequency ranges of 40-80 and 80-150 Hz than the MI-VT(-) group. The three groups were clearly differentiated by the maximal values of the 40-80-Hz QRS area map. These results suggest that the maximal value of the 40-80-Hz QRS area map is a new marker for VT after anterior MI.
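
    The band-limited reconstruction step described above (FFT, band selection, inverse transform, then time integration of the absolute value) can be sketched as follows. This is a minimal illustration with hypothetical names; the transfer-function attenuation applied in the study is omitted:

```python
import numpy as np

def band_area(x, fs, f_lo, f_hi):
    """Time integral of |waveform| after keeping only one frequency band.

    x: signal-averaged ECG samples, fs: sampling rate in Hz,
    [f_lo, f_hi): frequency band of interest in Hz.
    """
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < f_lo) | (f >= f_hi)] = 0.0     # zero coefficients outside the band
    y = np.fft.irfft(X, n=len(x))         # band-limited waveform
    return np.sum(np.abs(y)) / fs         # time integral of |y|
```

    For example, a pure 50 Hz tone contributes its full area to the 40-80 Hz band and essentially nothing to the 80-150 Hz band.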

  9. A more powerful exact test of noninferiority from binary matched-pairs data.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2008-08-15

    Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization, as an alternative to the partial maximization of Berger and Boos (J. Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, with the nuisance parameter controlled by estimation followed by maximization. Copyright 2008 John Wiley & Sons, Ltd.

  10. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffusive nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
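
    The MLEM iteration at the core of such reconstructions can be sketched as follows. This is the generic Poisson-model MLEM update, not the authors' fMLEM (the filter function and the SP_N forward model are omitted), and the names are illustrative:

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """Maximum likelihood expectation maximization for y ~ Poisson(A @ x).

    A: (m, n) nonnegative system matrix, y: (m,) measured data, x >= 0.
    """
    x = np.ones(A.shape[1])                  # flat initial source estimate
    sens = A.sum(axis=0)                     # sensitivity (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)   # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, eps)   # multiplicative update
    return x
```

    The update preserves nonnegativity automatically, which is one reason MLEM-type methods are popular for emission reconstruction problems like BLT.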

  11. A Bayesian-Based Novel Methodology to Generate Reliable Site Response Mapping Sensitive to Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Chakraborty, A.; Goto, H.

    2017-12-01

    The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas farther inside the mainland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties in mapping and to propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²), and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the τ (= 1/s²) parameter of the CAR prior controls the estimation of μ. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1).
This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
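
    The reported behavior of the maximum-likelihood model, following the site-specific mean at low uncertainties and converging to the model mean at high uncertainties, is characteristic of hierarchical (normal-normal) shrinkage. A minimal single-site sketch with hypothetical names, ignoring the spatial CAR structure:

```python
def shrunk_site_mean(y_bar, sigma2, n, mu0, s2):
    """Posterior mean of one site's response under a conjugate normal-normal model.

    y_bar: site sample mean over n events, sigma2: site-specific variance,
    mu0, s2: model-level (prior) mean and variance.
    """
    prec_data = n / sigma2      # precision of the site sample mean
    prec_prior = 1.0 / s2       # precision of the model-level prior
    # Precision-weighted average: data-dominated when sigma2 is small,
    # prior-dominated (shrunk to mu0) when sigma2 is large.
    return (prec_data * y_bar + prec_prior * mu0) / (prec_data + prec_prior)
```

    With a precise site (small σ²) the estimate sits at the site mean; with a very uncertain site it collapses to the model mean, mirroring the behavior described in the abstract.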

  12. Mapping Relative Likelihood for the Presence of Naturally Occurring Asbestos in Placer and Eastern Sacramento Counties, California

    NASA Astrophysics Data System (ADS)

    Higgins, C. T.; Clinkenbeard, J. P.; Churchill, R. K.

    2006-12-01

    Naturally occurring asbestos (NOA) is a term applied to the geologic occurrence of six types of silicate minerals that have asbestiform habit. These include the serpentine mineral chrysotile and the amphibole minerals actinolite, amosite, anthophyllite, crocidolite, and tremolite; all are classified as known human carcinogens. NOA, which is likely to be present in at least 50 of the 58 counties of California, is most commonly associated with serpentinite, but has been identified in other geologic settings as well. Because of health concerns, knowledge of where NOA may be present is important to regulatory agencies and the public. To improve this knowledge, the California Geological Survey (CGS) has prepared NOA maps of Placer County and eastern Sacramento County; both counties contain geologic settings where NOA has been observed. The maps are based primarily on geologic information compiled and interpreted from existing geologic and soils maps and on limited fieldwork. The system of map units is modified from an earlier one developed by the CGS for an NOA map of nearby western El Dorado County. In the current system, the counties are subdivided into different areas based on relative likelihood for the presence of NOA. Three types of areas are defined as most likely, moderately likely, and least likely to contain NOA. A fourth type is defined as areas of faulting and shearing; these geologic structures may locally increase the likelihood for the presence of NOA within or adjacent to areas most likely or moderately likely to contain NOA. The maps do not indicate if NOA is present or absent in bedrock or soils at any particular location. Local air pollution control districts are using the maps to help determine where to minimize generation of and exposure to dust that may contain NOA. The maps and accompanying reports can be viewed at http://www.consrv.ca.gov/cgs/ under Hazardous Minerals.

  13. Integrated Sensing Using DNA Nanoarchitectures

    DTIC Science & Technology

    2014-05-20

    Norton. Thiolated Dendrimers as Multi-Point Binding Headgroups for DNA Immobilization on Gold, Langmuir (2011). doi: 10.1021/la202444s ... Figure 6 uses dendrimers to provide multipoint adhesion of a single-stranded DNA component on a surface. Figure 6: Process for immobilizing ... dendrimer (shown as a round species). These dendrimer species are Generation 3 PAMAM dendrimers with ~30 thiol groups to bind the dendrimer/DNA construct

  14. Stability analysis of multipoint tool equipped with metal cutting ceramics

    NASA Astrophysics Data System (ADS)

    Maksarov, V. V.; Khalimonenko, A. D.; Matrenichev, K. G.

    2017-10-01

    The article highlights the issues of determining the stability of the cutting process for a multipoint cutting tool equipped with cutting ceramics. Based on the research conducted, recommendations are offered on the choice of parameters of replaceable ceramic cutting plates for milling. It is proposed that ceramic plates for milling be selected on the basis of the value of their electrical volume resistivity.

  15. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  16. Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.

    PubMed

    Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia

    2017-04-01

    Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts in flood risk assessment. This study develops an integrated framework for estimating spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information system, and remote sensing. The north part of Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inferring and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
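
    The entropy weighting of the environmental indices can be sketched with the standard entropy weight method. This sketch assumes nonnegative index values over the grid cells and uses illustrative names; it is not necessarily the authors' exact implementation:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based objective weights for index columns (rows = grid cells).

    Indices with more variation across cells carry more information
    and therefore receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)         # normalized entropy per index
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()                         # weights summing to 1
```

    A constant index (no spatial variation) has entropy 1 and weight 0, while a varying index absorbs the remaining weight.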

  17. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  18. Workload, Performance, and Reliability of Digital Computing Systems.

    DTIC Science & Technology

    1980-12-01

    Failures have been observed at times tf_i after observing the system since ts_i. The data are represented by a set of pairs [ts_i, tf_i], i = 1, ..., n; the maximum likelihood values of the parameters α, γ, ρ are those values that maximize the joint density p^(n) of the observed failure times, a problem equivalent to minimizing a corresponding function. Take, for instance, the case of the distribution obtained from the simplified model.

  19. Warehouse multipoint temperature and humidity monitoring system design based on Kingview

    NASA Astrophysics Data System (ADS)

    Ou, Yanghui; Wang, Xifu; Liu, Jingyun

    2017-04-01

    Storage is a key link in modern logistics, and warehouse environment monitoring is an important part of storage safety management. Meeting the storage requirements of different materials and guaranteeing their quality to the greatest extent is therefore of great significance. In warehouse environment monitoring, the most important parameters are air temperature and relative humidity. This paper presents a design for a warehouse multipoint temperature and humidity monitoring system based on Kingview, which realizes real-time acquisition, monitoring and storage of multipoint temperature and humidity data in the warehouse using temperature and humidity sensors. Taking a bulk grain warehouse as an example, the system combines the data collected by real-time monitoring with a corresponding algorithm to give expert advice, providing theoretical guidance for controlling the temperature and humidity in the grain warehouse.

  20. A Robust Distributed Multipoint Fiber Optic Gas Sensor System Based on AGC Amplifier Structure.

    PubMed

    Zhu, Cunguang; Wang, Rende; Tao, Xuechen; Wang, Guangwei; Wang, Pengpeng

    2016-07-28

    A harsh-environment-oriented distributed multipoint fiber optic gas sensor system realized with automatic gain control (AGC) technology is proposed. To improve photoelectric signal reliability, the electronic variable gain can be modified in real time by an AGC closed-loop feedback structure to compensate for optical transmission loss caused by fiber bending or other factors. The deviation of the system based on the AGC structure is below 4.02% when the photoelectric signal decays due to fiber bending loss at a bending radius of 5 mm, which is 20 times lower than that of an ordinary differential system. In addition, the AGC circuit with the same electric parameters can keep the baseline intensity of signals in different channels of the distributed multipoint sensor system at the same level. This avoids repetitive calibrations and streamlines the installation process.
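
    The closed-loop gain idea can be illustrated with a minimal digital sketch: the gain is nudged multiplicatively until the output level sits at a setpoint, so a bend-induced attenuation is compensated automatically. The update rule and names here are hypothetical, not the paper's analog circuit:

```python
def agc_track(signal, target=1.0, alpha=0.1, g0=1.0):
    """Closed-loop AGC: multiplicative gain update driving |output| toward target.

    signal: attenuated input samples, alpha: loop gain (0 < alpha < 1).
    """
    g = g0
    out = []
    for x in signal:
        y = g * x
        out.append(y)
        # Feedback: raise the gain when the output is below the setpoint,
        # lower it when above (update is geometric, so g stays positive).
        g *= (target / max(abs(y), 1e-12)) ** alpha
    return out, g
```

    For a constant 50% attenuation the gain settles near 2, restoring the output to the setpoint regardless of the loss that caused the droop.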

  1. Adolescents' Sexual Behavior and Academic Attainment

    ERIC Educational Resources Information Center

    Frisco, Michelle L.

    2008-01-01

    High school students have high ambitions but do not always make choices that maximize their likelihood of educational success. This was the motivation for investigating the relationships between high school sexual behavior and two important milestones in academic attainment: earning a high school diploma and enrolling in distinct postsecondary…

  2. 76 FR 77452 - Advisory Circular for Stall and Stick Pusher Training

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... a recovery from a stall or approach. Scenario-based training that includes realistic events that... training, testing, and checking recommendations designed to maximize the likelihood that pilots will... circular was developed based on a review of recommended practices developed by major aircraft manufacturers...

  3. Public Data Set: Erratum: "Multi-point, high-speed passive ion velocity distribution diagnostic on the Pegasus Toroidal Experiment" [Rev. Sci. Instrum. 83, 10D516 (2012)

    DOE Data Explorer

    Burke, Marcus G. [University of Wisconsin-Madison] (ORCID:0000000176193724); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Winz, Gregory R. [University of Wisconsin-Madison] (ORCID:0000000177627184)

    2016-07-18

    This data set contains openly-documented, machine readable digital research data corresponding to figures published in M.G. Burke et al., 'Erratum: "Multi-point, high-speed passive ion velocity distribution diagnostic on the Pegasus Toroidal Experiment" [Rev. Sci. Instrum. 83, 10D516 (2012)],' Rev. Sci. Instrum. 87, 079902 (2016).

  4. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

    This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase in per-packet overhead due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in a negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implemented the proposed method in a wireless testbed, and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
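
    The simplest instance of such piggy-backed linear network coding is XOR (GF(2)) coding: a receiver that missed one native packet can recover it from the coded packet and the natives it did receive. A minimal sketch with illustrative packet contents:

```python
def xor_bytes(*pkts):
    """Bitwise XOR of equal-length packets (a linear combination over GF(2))."""
    out = bytearray(len(pkts[0]))
    for p in pkts:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

# A node that correctly received A's and B's packets piggy-backs their XOR.
a, b = b"ALPHA", b"BRAVO"
coded = xor_bytes(a, b)

# A receiver that lost b but got a and the coded packet recovers b:
recovered = xor_bytes(coded, a)
```

    The paper's scheme generalizes this with per-node coding vectors negotiated in advance, so only a small coding-information field, not the vector itself, travels with each piggy-back packet.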

  5. Site-specific multipoint fluorescence measurement system with end-capped optical fibers.

    PubMed

    Song, Woosub; Moon, Sucbei; Lee, Byoung-Cheol; Park, Chul-Seung; Kim, Dug Young; Kwon, Hyuk Sang

    2011-07-10

    We present the development and implementation of a spatially and spectrally resolved multipoint fluorescence correlation spectroscopy (FCS) system utilizing multiple end-capped optical fibers and an inexpensive laser source. Specially prepared end-capped optical fibers placed in an image plane were used both to collect fluorescence signals from the sample and to deliver those signals to the detectors. The placement of independently selected optical fibers on the image plane was done by monitoring the end-capped fiber tips at the focus using a CCD, and fluorescence from specific positions of a sample was collected by an end-capped fiber, which could accurately represent light intensities or spectral data without incurring any disturbance. A fast multipoint spectroscopy system with a time resolution of ∼1.5 ms was then implemented using a prism and an electron-multiplying charge-coupled device with pixel binning for the region of interest. The accuracy of the proposed system was confirmed by experimental results based on an FCS analysis of microspheres in distilled water. We expect that the proposed multipoint site-specific fluorescence measurement system can be used as an inexpensive fluorescence measurement tool to study intracellular and molecular dynamics in cell biology. © 2011 Optical Society of America

  6. NAVIS-An UGV Indoor Positioning System Using Laser Scan Matching for Large-Area Real-Time Applications

    PubMed Central

    Tang, Jian.; Chen, Yuwei.; Jaakkola, Anttoni.; Liu, Jinbing.; Hyyppä, Juha.; Hyyppä, Hannu.

    2014-01-01

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). However, critical implementation problems remain, such as estimating the position by sensing an unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, so further development work is necessary. Unfortunately, most existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application over large areas and lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy that can be part of a large-area localization/mapping application. 
The NAVIS platform can reach an updating rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment. Therefore, it can be utilized in real-time applications. PMID:24999715

  7. NAVIS-An UGV indoor positioning system using laser scan matching for large-area real-time applications.

    PubMed

    Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu

    2014-07-04

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). However, critical implementation problems remain, such as estimating the position by sensing an unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, so further development work is necessary. Unfortunately, most existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application over large areas and lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy that can be part of a large-area localization/mapping application. 
The NAVIS platform can reach an updating rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment. Therefore, it can be utilized in real-time applications.

  8. Attenuation correction strategies for multi-energy photon emitters using SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-12-31

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction, or whether it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to those of the other locations. 
The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.

  9. Quantum coherence generating power, maximally abelian subalgebras, and Grassmannian geometry

    NASA Astrophysics Data System (ADS)

    Zanardi, Paolo; Campos Venuti, Lorenzo

    2018-01-01

    We establish a direct connection between the power of a unitary map in d-dimensions (d < ∞) to generate quantum coherence and the geometry of the set Md of maximally abelian subalgebras (of the quantum system full operator algebra). This set can be seen as a topologically non-trivial subset of the Grassmannian over linear operators. The natural distance over the Grassmannian induces a metric structure on Md, which quantifies the lack of commutativity between the pairs of subalgebras. Given a maximally abelian subalgebra, one can define, on physical grounds, an associated measure of quantum coherence. We show that the average quantum coherence generated by a unitary map acting on a uniform ensemble of quantum states in the algebra (the so-called coherence generating power of the map) is proportional to the distance between a pair of maximally abelian subalgebras in Md connected by the unitary transformation itself. By embedding the Grassmannian into a projective space, one can pull-back the standard Fubini-Study metric on Md and define in this way novel geometrical measures of quantum coherence generating power. We also briefly discuss the associated differential metric structures.

  10. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
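
    The core of such a meta-analysis, combining independent estimates of the same quantity weighted by the reciprocal of their variances, can be sketched for a single inter-marker distance. The proposed method also handles map estimates over different marker sets; the names here are illustrative:

```python
def combine_map_estimates(estimates, variances):
    """Inverse-variance weighted combination of independent estimates
    of one inter-marker distance.

    Returns the combined estimate and its (smaller) variance.
    """
    weights = [1.0 / v for v in variances]        # precision of each estimate
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / wsum
    return est, 1.0 / wsum
```

    Precise estimates dominate the combination, and the combined variance is always below the smallest input variance, which is the sampling-variation reduction the abstract refers to.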

  11. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  12. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify, from family data, genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which, in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  13. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the likelihood maximum without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
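
    The numerical trick described above, analytically cancelling exponential factors so the log-likelihood stays finite, can be illustrated generically. This is a sketch of the idea only, not the Arason-Levi code; the function names are ours, and the actual marginal-Fisher likelihood additionally involves modified Bessel functions that are handled the same way (factor out the dominant exponential, keep the remainder in log space).

```python
import numpy as np

def log_sinh(x):
    """log sinh(x) = x - log 2 + log(1 - exp(-2x)).

    The e^x factor is cancelled analytically, so the result stays
    finite where sinh(x) itself would overflow (x > ~710 in float64).
    """
    x = np.asarray(x, dtype=float)
    return x - np.log(2.0) + np.log1p(-np.exp(-2.0 * x))

def logsumexp(a):
    """log(sum(exp(a))) with the largest exponent factored out first."""
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m + np.log(np.sum(np.exp(a - m)))
```

For example, `log_sinh(1e6)` evaluates without overflow even though `np.sinh(1e6)` is infinite in floating point; the same device lets the whole log-likelihood be evaluated anywhere in the parameter space.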

  14. Localization of the Netherton Syndrome Gene to Chromosome 5q32, by Linkage Analysis and Homozygosity Mapping

    PubMed Central

    Chavanas, Stéphane; Garner, Chad; Bodemer, Christine; Ali, Mohsin; Hamel-Teillac, Dominique; Wilkinson, John; Bonafé, Jean-Louis; Paradisi, Mauro; Kelsell, David P.; Ansai, Shin-ichi; Mitsuhashi, Yoshihiko; Larrègue, Marc; Leigh, Irene M.; Harper, John I.; Taïeb, Alain; Prost, Yves de; Cardon, Lon R.; Hovnanian, Alain

    2000-01-01

    Netherton syndrome (NS [MIM 256500]) is a rare and severe autosomal recessive disorder characterized by congenital ichthyosis, a specific hair-shaft defect (trichorrhexis invaginata), and atopic manifestations. Infants with this syndrome often fail to thrive; life-threatening complications result in high postnatal mortality. We report the assignment of the NS gene to chromosome 5q32, by linkage analysis and homozygosity mapping in 20 families affected with NS. Significant evidence for linkage (maximum multipoint LOD score 10.11) between markers D5S2017 and D5S413 was obtained, with no evidence for locus heterogeneity. Analysis of critical recombinants mapped the NS locus between markers D5S463 and D5S2013, within a genetic interval of <3.5 cM. The NS locus is telomeric to the cytokine gene cluster in 5q31. The five known genes encoding casein kinase Iα, the α subunit of retinal rod cGMP phosphodiesterase, the regulator of mitotic-spindle assembly, adrenergic receptor β2, and the diastrophic dysplasia sulfate transporter, as well as the 38 expressed-sequence tags mapped within the critical region, are not obvious candidates. Our study is the first step toward the positional cloning of the NS gene. This finding promises a better understanding of the molecular mechanisms that control epidermal differentiation and immunity. PMID:10712206

  15. A case study for the integration of predictive mineral potential maps

    NASA Astrophysics Data System (ADS)

    Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye

    2014-09-01

    This study aims to elaborate mineral potential maps using various models and to verify their accuracy for the epithermal gold (Au)-silver (Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits shared a common genesis. The maps of potential Au and Ag deposits were produced from geological data in the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of such relationships, and 3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and were recombined afterwards using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas accuracy was 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.
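
    The per-factor quantification step can be sketched with a simple frequency-ratio computation. This is a minimal illustration under our own function and variable names, not the authors' GIS workflow: each class of a factor map receives the ratio of its share of known deposits to its share of total area, and a potential map is then obtained by summing these ratios over all factor maps.

```python
import numpy as np

def likelihood_ratios(factor, deposits):
    """Per-class frequency (likelihood) ratio for one factor map.

    factor   : integer class id for each grid cell (flattened raster)
    deposits : boolean mask of known deposit cells
    Returns {class: (share of deposits in class) / (share of area in class)};
    values > 1 indicate a positive spatial association with deposits.
    """
    factor = np.asarray(factor)
    deposits = np.asarray(deposits, dtype=bool)
    n_cells, n_dep = factor.size, deposits.sum()
    out = {}
    for c in np.unique(factor):
        in_c = factor == c
        dep_share = (in_c & deposits).sum() / n_dep
        area_share = in_c.sum() / n_cells
        out[c] = dep_share / area_share
    return out
```

A combined potential map would assign each cell the sum, over factor maps, of the ratio for that cell's class; accuracy can then be checked against held-out test deposits as in the abstract.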

  16. A canonical state-space representation for SISO systems using multipoint Jordan CFE. [Continued-Fraction Expansion]

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Guo, Tong-Yi; Shieh, Leang-San

    1991-01-01

    A canonical state-space realization based on the multipoint Jordan continued-fraction expansion (CFE) is presented for single-input-single-output (SISO) systems. The similarity transformation matrix which relates the new canonical form to the phase-variable canonical form is also derived. The presented canonical state-space representation is particularly attractive for the application of SISO system theory in which a reduced-dimensional time-domain model is necessary.

  17. Free choice access to multipoint wellness education and related services positively impacts employee wellness: a randomized and controlled trial.

    PubMed

    Sforzo, Gary A; Kaye, Miranda P; Calleri, David; Ngai, Nancy

    2012-04-01

    Examine effects of voluntary participation in employer-sponsored, multipoint wellness education programming on employee wellness. A randomized and controlled design was used to organize 96 participants into an education + access group, an access-only group, and a control group. Outcome measures were made at the start and end of a 12-week intervention period. Education + access improved wellness knowledge, which, in turn, enhanced life satisfaction, employee morale, and energy, and nearly improved stress level. Those who received facility access without educational programming did not reap health benefits. Employees voluntarily used the fitness facility and healthy meal cards only 1.3 and 1.5 times per week, respectively. Participants made limited and likely inadequate use of wellness opportunities. As a result, physical health benefits (eg, blood pressure, fitness parameters) were not seen in the present study. However, multipoint wellness education resulted in psychosocial health benefits in 12 weeks.

  18. Remote Sensing of the Reconnection Electric Field From In Situ Multipoint Observations of the Separatrix Boundary

    NASA Astrophysics Data System (ADS)

    Nakamura, T. K. M.; Nakamura, R.; Varsani, A.; Genestreti, K. J.; Baumjohann, W.; Liu, Y.-H.

    2018-05-01

    A remote sensing technique to infer the local reconnection electric field based on in situ multipoint spacecraft observation at the reconnection separatrix is proposed. In this technique, the increment of the reconnected magnetic flux is estimated by integrating the in-plane magnetic field during the sequential observation of the separatrix boundary by multipoint measurements. We tested this technique by applying it to virtual observations in a two-dimensional fully kinetic particle-in-cell simulation of magnetic reconnection without a guide field and confirmed that the estimated reconnection electric field indeed agrees well with the exact value computed at the X-line. We then applied this technique to an event observed by the Magnetospheric Multiscale mission when crossing an energetic plasma sheet boundary layer during an intense substorm. The estimated reconnection electric field for this event is nearly 1 order of magnitude higher than a typical value of magnetotail reconnection.
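
    The core of the estimate, the reconnected flux increment obtained by integrating the in-plane magnetic field along the spacecraft path between separatrix crossings, divided by the elapsed time, can be sketched as follows. This is schematic only; the function name, the 1-D path parameterization, and the deferral of unit conversion are our simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def reconnection_efield(b_inplane, path_pos, t_start, t_end):
    """Average reconnection electric field between two separatrix crossings.

    b_inplane : in-plane magnetic field component sampled along the path
    path_pos  : positions of the samples along the spacecraft path
    The trapezoidal integral of B along the path approximates the
    reconnected flux per unit out-of-plane length; dividing by the
    crossing interval gives an average E-field (unit conversion from,
    e.g., nT*km/s to mV/m is left to the caller).
    """
    b = np.asarray(b_inplane, dtype=float)
    s = np.asarray(path_pos, dtype=float)
    flux = 0.5 * np.sum((b[1:] + b[:-1]) * np.diff(s))  # trapezoidal rule
    return flux / (t_end - t_start)
```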

  19. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
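
    The dynamic-programming half of DP-EM can be sketched as the classical optimal partition of a profile into K contiguous segments minimizing within-segment sum of squares. This is our minimal version under assumed function names; the paper's model additionally clusters the segment levels with a mixture fitted by EM, which is omitted here.

```python
import numpy as np

def segment_costs(x):
    """cost[i, j] = within-segment sum of squares of x[i:j], via cumsums."""
    x = np.asarray(x, dtype=float)
    n = x.size
    c1 = np.concatenate(([0.0], np.cumsum(x)))
    c2 = np.concatenate(([0.0], np.cumsum(x * x)))
    cost = np.full((n, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
            cost[i, j] = s2 - s * s / m
    return cost

def best_segmentation(x, k):
    """Optimal split of x into k contiguous segments (minimal total SSE)."""
    n = len(x)
    cost = segment_costs(x)
    dp = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1, i] + cost[i, j]
                if c < dp[seg, j]:
                    dp[seg, j], back[seg, j] = c, i
    bounds, j = [], n  # backtrack the optimal breakpoints
    for seg in range(k, 0, -1):
        i = back[seg, j]
        bounds.append((i, j))
        j = i
    return bounds[::-1], dp[k, n]
```

Model selection over the number of segments (and clusters) would then be layered on top of this exact DP, as the abstract's heuristic does.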

  20. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  2. X-linked borderline mental retardation with prominent behavioral disturbance: Phenotype, genetic localization, and evidence for disturbed monoamine metabolism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunner, H.G.; Nelen, M.R.; Zandvoort, P. van

    The authors have identified a large Dutch kindred with a new form of X-linked nondysmorphic mild mental retardation. All affected males in this family show very characteristic abnormal behavior, in particular aggressive and sometimes violent behavior. Other types of impulsive behavior include arson, attempted rape, and exhibitionism. Attempted suicide has been reported in a single case. The locus for this disorder could be assigned to the Xp11-21 interval between DXS7 and DXS77 by linkage analysis using markers spanning the X chromosome. A maximal multipoint lod score of 3.69 was obtained at the monoamine oxidase type A (MAOA) locus, and affected males show evidence of disturbed monoamine metabolism. These data are compatible with a primary defect in the structural gene for MAOA and/or monoamine oxidase type B (MAOB). Normal platelet MAOB activity suggests that the unusual behavior pattern in this family may be caused by isolated MAOA deficiency. 34 refs., 4 figs., 4 tabs.

  3. Stabilization by multipoint covalent attachment of a biocatalyst with polygalacturonase activity used for juice clarification.

    PubMed

    Ramírez Tapias, Yuly A; Rivero, Cintia W; Gallego, Fernando López; Guisán, José M; Trelles, Jorge A

    2016-10-01

    Derivatized-agarose supports are suitable for enzyme immobilization by different methods, taking advantage of different physical, chemical and biological conditions of the protein and the support. In this study, agarose particles were modified with MANAE, PEI and glyoxyl groups and evaluated to stabilize polygalacturonase from Streptomyces halstedii ATCC 10897. A new immobilized biocatalyst was developed using glyoxyl-agarose as support; it exhibited high performance in degrading polygalacturonic acid and releasing oligogalacturonides. Maximal enzyme activity was detected at 5 h of reaction using 0.05 g/mL of immobilized biocatalyst, which released 3 mg/mL of reducing sugars and gave the highest product yield and increased stability. These results are very favorable for pectin degradation, with reusability over up to 18 successive reactions (90 h), and for application in juice clarification. Plum (4.7 °Bx) and grape (10.6 °Bx) juices were successfully clarified, increasing reducing sugar content and markedly decreasing turbidity and viscosity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
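
    The nonuniqueness issue can be made concrete with a grid scan of the 3PL log-likelihood in θ, counting interior local maxima. This is a toy illustration with our own function names and made-up item parameters, not the authors' iterative MAP procedure.

```python
import numpy as np

def loglik_3pl(theta_grid, a, b, c, u):
    """Log-likelihood of a response pattern u under the 3PL model,
    evaluated on a grid of proficiency values.

    a, b, c : discrimination, difficulty, pseudo-guessing per item
    u       : 0/1 responses per item
    """
    t = np.asarray(theta_grid, dtype=float)[:, None]
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (t - b)))
    return (u * np.log(p) + (1 - u) * np.log(1.0 - p)).sum(axis=1)

def interior_local_maxima(ll):
    """Indices of strict interior local maxima of a 1-D array."""
    ll = np.asarray(ll)
    return np.flatnonzero((ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])) + 1
```

With guessing parameters c > 0 and certain response patterns, such a scan can reveal more than one local maximum, which is exactly the situation where a plain ML (or single-start MAP) estimate can land on the wrong mode.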

  5. Automatic physical inference with information maximizing neural networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.

  6. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  7. Map and map database of susceptibility to slope failure by sliding and earthflow in the Oakland area, California

    USGS Publications Warehouse

    Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven

    2001-01-01

    Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km2) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. 
A similar map could be prepared for any area where the three basic constituents, a geologic map, a landslide inventory, and a slope map, are available in digital form. Added predictive power of the new susceptibility model may reside in attributes that remain to be explored, among them seismic shaking, distance to nearest road, and terrain elevation, aspect, relief, and curvature.
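
    The susceptibility recipe in the abstract can be sketched numerically on toy arrays. The function and variable names are ours; the 1.33 multiplier for cells inside mapped deposits is taken from the text, and the real model of course operates on full 30-m rasters per geologic-map unit.

```python
import numpy as np

def susceptibility(unit_id, slope_deg, old_deposit, multiplier=1.33):
    """Cell susceptibility = fraction of cells sharing the cell's
    (geologic unit, one-degree slope bin) that lie within old landslide
    deposits; cells inside a mapped deposit get that fraction raised
    by the multiplier."""
    unit_id = np.asarray(unit_id)
    slope_bin = np.floor(np.asarray(slope_deg)).astype(int)
    old_deposit = np.asarray(old_deposit, dtype=bool)
    sus = np.zeros(unit_id.shape, dtype=float)
    for u in np.unique(unit_id):
        for s in np.unique(slope_bin[unit_id == u]):
            cells = (unit_id == u) & (slope_bin == s)
            sus[cells] = old_deposit[cells].mean()
    sus[old_deposit] *= multiplier
    return sus
```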

  8. Temperature Map of the Perseus Cluster of Galaxies Observed with ASCA

    NASA Technical Reports Server (NTRS)

    Furusho, T.; Yamasaki, N. Y.; Ohashi, T.; Shibata, R.; Ezawa, H.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    We present a two-dimensional temperature map of the Perseus cluster based on multi-pointing observations with the Advanced Satellite for Cosmology and Astrophysics (ASCA) Gas Imaging Spectrometer (GIS), covering a region with a diameter of approximately 2 deg. By correcting for the effect of the X-ray telescope response, the temperatures were estimated from hardness ratios and the complete temperature structure of the cluster with a spatial resolution of about 100 kpc was obtained for the first time. There is an extended cool region with a diameter of approximately 20 arcmin and kT approx. 5 keV at about 20 arcmin east from the cluster center. This region also shows higher surface brightness, is surrounded by a large ring-like hot region with kT approx. > 7 keV, and is likely a remnant of a merger with a poor cluster. Another extended cool region extends outward from the IC 310 subcluster. These features and the presence of several other hot and cool blobs suggest that this rich cluster has been formed as a result of a repetition of many subcluster mergers.

  9. Linkage analysis of the Fanconi anemia gene FACC with chromosome 9q markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, A.D.; Shin, H.T.; Kaporis, A.G.

    1994-09-01

    Fanconi anemia (FA) is a genetically heterogeneous syndrome, with at least four different complementation groups as determined by cell fusion studies. The gene for complementation group C, FACC, has been cloned and mapped to chromosome 9q22.3 by in situ hybridization, while linkage analysis has supported the placement of another gene on chromosome 20q. We have analyzed five microsatellite markers and one RFLP on chromosome 9q in a panel of FA families from the International Fanconi Anemia Registry (IFAR) in order to place FACC on the genetic map. Polymorphisms were typed in 308 individuals from 51 families. FACC is tightly linked to both D9S151 (θ_max = 0.025, Z_max = 7.75) and D9S196 (θ_max = 0.041, Z_max = 7.89); multipoint analysis is in progress. We are currently screening a YAC clone that contains the entire FACC gene for additional microsatellite markers suitable for haplotype analysis of FA families.

  10. Identification of a novel locus for a USH3 like syndrome combined with congenital cataract.

    PubMed

    Dad, S; Østergaard, E; Thykjaer, T; Albrectsen, A; Ravn, K; Rosenberg, T; Møller, L B

    2010-10-01

    Usher syndrome (USH) is the most common genetic disease that causes both deafness and blindness. USH is divided into three types, USH1, USH2 and USH3, depending on the age of onset, the course of the disease, and on the degree of vestibular dysfunction. By homozygosity mapping of a consanguineous Danish family of Dutch descent, we have identified a novel locus for a rare USH3-like syndrome. The affected family members have a unique association of retinitis pigmentosa, progressive hearing impairment, vestibular dysfunction, and congenital cataract. The phenotype is similar, but not identical to that of USH3 patients, as congenital cataract has not been reported for USH3. By homozygosity mapping, we identified a 7.3 Mb locus on chromosome 15q22.2-23 with a maximum multipoint LOD score of 2.0. The locus partially overlaps with the USH1 locus, USH1H, a novel unnamed USH2 locus, and the non-syndromic deafness locus DFNB48. © 2010 John Wiley & Sons A/S.

  11. An analogy of the charge distribution on Julia sets with the Brownian motion

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.

    1989-09-01

    A way to compute the entropy of an invariant measure of a hyperbolic rational map from the information given by a Ruelle-Perron-Frobenius operator of a generic Hölder-continuous function will be shown. This result was motivated by an analogy between Brownian motion and the dynamical system given by a rational map with its maximal measure. When the rational map is a polynomial, the maximal measure is the charge distribution on the Julia set. The main theorem of this paper can be seen as a large deviation result. It is a kind of Donsker-Varadhan formula for dynamical systems.

  12. STE/ICE (Simplified Test Equipment/Internal Combustion Engines) Design Guide for Vehicle Diagnostic Connector Assemblies

    DTIC Science & Technology

    1982-08-01

    Table-of-contents excerpt: 3.2 Diesel Engine Speed Transducer; 3.3 Pressure Transducer; 3.4 Temperature Transducer; 3.5 Differential Pressure Switch; 3.6 Differential Pressure Switch, Multi-Point; 3.7 Current Measurement Transducer; 3.8 Electrolyte Level Probes; 3.9 Diagnostic Connector; 3.10 Harness. Listed part numbers include 12258933 (Differential Pressure Switch, Multi-point), 12258934 (Differential Pressure Switch), 12258938 (Electrolyte Level Sensor), and 12258935 (Shunt).

  13. Multipoint propagators for non-Gaussian initial conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardeau, Francis; Sefusatti, Emiliano; Crocce, Martin

    2010-10-15

    We show here how renormalized perturbation theory calculations applied to the quasilinear growth of the large-scale structure can be carried out in the presence of primordial non-Gaussian (PNG) initial conditions. It is explicitly demonstrated that the series reordering scheme proposed in Bernardeau, Crocce, and Scoccimarro [Phys. Rev. D 78, 103521 (2008)] is preserved for non-Gaussian initial conditions. This scheme applies to the power spectrum and higher-order spectra and is based on a reorganization of the contributing terms into the sum of products of multipoint propagators. In the case of PNG, new contributing terms appear, the importance of which is discussed in the context of current PNG models. The properties of the building blocks of such resummation schemes, the multipoint propagators, are then investigated. It is first remarked that their expressions are left unchanged at one-loop order irrespective of the statistical properties of the initial field. We furthermore show that the high-momentum limit of each of these propagators can be explicitly computed even for arbitrary initial conditions. They are found to be damped by an exponential cutoff whose expression is directly related to the moment generating function of the one-dimensional displacement field. This extends what had been established for multipoint propagators for Gaussian initial conditions. Numerical forms of the cutoff are shown for the so-called local model of PNG.

  14. Traceability of pH measurements by glass electrode cells: performance characteristic of pH electrodes by multi-point calibration.

    PubMed

    Naumann, R; Alexander-Weber, Ch; Eberhardt, R; Giera, J; Spitzer, P

    2002-11-01

    Routine pH measurements are carried out with pH meter-glass electrode assemblies. In most cases the glass and reference electrodes are thereby fashioned into a single probe, the so-called 'combination electrode' or simply 'the pH electrode'. The use of these electrodes is subject to various effects, described below, producing uncertainties of unknown magnitude. Therefore, the measurement of pH of a sample requires a suitable calibration by certified standard buffer solutions (CRMs) traceable to primary pH standards. The procedures in use are based on calibrations at one point, at two points bracketing the sample pH and at a series of points, the so-called multi-point calibration. The multi-point calibration (MPC) is recommended if minimum uncertainty and maximum consistency are required over a wide range of unknown pH values. Details of uncertainty computations for the two-point and MPC procedure are given. Furthermore, the multi-point calibration is a useful tool to characterise the performance of pH electrodes. This is demonstrated with different commercial pH electrodes. ELECTRONIC SUPPLEMENTARY MATERIAL is available if you access this article at http://dx.doi.org/10.1007/s00216-002-1506-5. On that page (frame on the left side), a link takes you directly to the supplementary material.
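
    A minimal numerical sketch of multi-point calibration follows: an ordinary least-squares line through all certified buffer points, then inversion of the line for an unknown sample. The function names are ours, and real MPC procedures add a full uncertainty budget (buffer certification, repeatability, drift) on top of this fit.

```python
import numpy as np

def mpc_calibrate(ph_buffers, emf_mv):
    """Fit emf = intercept + slope * pH over all buffer points by
    ordinary least squares (the multi-point calibration line)."""
    ph = np.asarray(ph_buffers, dtype=float)
    emf = np.asarray(emf_mv, dtype=float)
    A = np.vstack([np.ones_like(ph), ph]).T
    (intercept, slope), *_ = np.linalg.lstsq(A, emf, rcond=None)
    return intercept, slope

def ph_of_sample(emf_sample, intercept, slope):
    """Invert the calibration line for an unknown sample's emf reading."""
    return (emf_sample - intercept) / slope
```

The fitted slope also characterizes the electrode itself: a practical slope well short of the Nernstian value (about -59.16 mV per pH unit at 25 °C) flags a degraded electrode, which is the performance-characterization use of MPC the abstract mentions.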

  15. Fragment assignment in the cloud with eXpress-D

    PubMed Central

    2013-01-01

    Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available, and it can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
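
    The batch EM being distributed here can be sketched in a few lines for toy data. This is our minimal abundance-style version, not the eXpress-D implementation: each fragment is assumed equally likely to originate from any of its compatible targets, weighted by the current target abundances.

```python
import numpy as np

def em_fragment_assignment(compat, n_targets, n_iter=200):
    """EM for probabilistic fragment assignment.

    compat : list of index lists; compat[f] holds the target indices
             fragment f is compatible with (its ambiguous mappings).
    E-step: split each fragment across its targets in proportion to the
    current abundance estimates. M-step: renormalize expected counts.
    Returns the ML relative abundances of the targets.
    """
    theta = np.full(n_targets, 1.0 / n_targets)
    for _ in range(n_iter):
        counts = np.zeros(n_targets)
        for targets in compat:
            w = theta[targets]
            counts[targets] += w / w.sum()  # fractional assignment
        theta = counts / counts.sum()
    return theta
```

The distributed version partitions the fragments across workers, computes partial expected counts in parallel, and sums them before each M-step, which is exactly the map-reduce structure Spark provides.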

  16. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  17. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  18. Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data

    ERIC Educational Resources Information Center

    Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.

    2009-01-01

    The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…

  19. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  20. Toward a New Diversity: Guidelines for a Staff Diversity/Affirmative Action Plan.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    These guidelines for California's community colleges specify required elements of a staff diversity/affirmative action plan, recommend sound practices and activities that will maximize the likelihood of success, and provide information on, and required elements of, related issues such as sexual harassment, handicap discrimination, and AIDS in the…

  1. The striking similarities between standard, distractor-free, and target-free recognition

    PubMed Central

    Dobbins, Ian G.

    2012-01-01

    It is often assumed that observers seek to maximize correct responding during recognition testing by actively adjusting a decision criterion. However, early research by Wallace (Journal of Experimental Psychology: Human Learning and Memory 4:441–452, 1978) suggested that recognition rates for studied items remained similar, regardless of whether or not the tests contained distractor items. We extended these findings across three experiments, addressing whether detection rates or observer confidence changed when participants were presented standard tests (targets and distractors) versus “pure-list” tests (lists composed entirely of targets or distractors). Even when observers were made aware of the composition of the pure-list test, the endorsement rates and confidence patterns remained largely similar to those observed during standard testing, suggesting that observers are typically not striving to maximize the likelihood of success across the test. We discuss the implications for decision models that assume a likelihood ratio versus a strength decision axis, as well as the implications for prior findings demonstrating large criterion shifts using target probability manipulations. PMID:21476108

  2. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
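    As a rough illustration of fitting seasonally varying AR parameters, the sketch below applies per-season conditional least squares to a simulated periodic AR(1) series. This is a simpler moment-style estimator, not the paper's exact Gaussian likelihood maximization, and all data are synthetic:

```python
# Conditional least-squares fit of a periodic AR(1) model,
# X_t = phi_{s(t)} * X_{t-1} + e_t, with one coefficient per season.
import random

def fit_periodic_ar1(x, period):
    # Within-season least-squares regression of X_t on X_{t-1}.
    num = [0.0] * period
    den = [0.0] * period
    for t in range(1, len(x)):
        s = t % period
        num[s] += x[t] * x[t - 1]
        den[s] += x[t - 1] ** 2
    return [n / d for n, d in zip(num, den)]

# Simulate a two-season series with known coefficients.
random.seed(0)
true_phi = [0.8, -0.3]
x = [random.gauss(0, 1)]
for t in range(1, 20000):
    x.append(true_phi[t % 2] * x[-1] + random.gauss(0, 1))

phi_hat = fit_periodic_ar1(x, 2)  # should recover roughly [0.8, -0.3]
```

    The exact likelihood approach of the paper additionally accounts for the moving average terms and the distribution of the initial observations.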

  3. The sumLINK statistic for genetic linkage analysis in the presence of heterogeneity.

    PubMed

    Christensen, G B; Knight, S; Camp, N J

    2009-11-01

    We present the "sumLINK" statistic--the sum of multipoint LOD scores for the subset of pedigrees with nominally significant linkage evidence at a given locus--as an alternative to common methods to identify susceptibility loci in the presence of heterogeneity. We also suggest the "sumLOD" statistic (the sum of positive multipoint LOD scores) as a companion to the sumLINK. sumLINK analysis identifies genetic regions of extreme consistency across pedigrees without regard to negative evidence from unlinked or uninformative pedigrees. Significance is determined by an innovative permutation procedure based on genome shuffling that randomizes linkage information across pedigrees. This procedure for generating the empirical null distribution may be useful for other linkage-based statistics as well. Using 500 genome-wide analyses of simulated null data, we show that the genome shuffling procedure results in the correct type 1 error rates for both the sumLINK and sumLOD. The power of the statistics was tested using 100 sets of simulated genome-wide data from the alternative hypothesis from GAW13. Finally, we illustrate the statistics in an analysis of 190 aggressive prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics, where we identified a new susceptibility locus. We propose that the sumLINK and sumLOD are ideal for collaborative projects and meta-analyses, as they do not require any sharing of identifiable data between contributing institutions. Further, loci identified with the sumLINK have good potential for gene localization via statistical recombinant mapping, as, by definition, several linked pedigrees contribute to each peak.
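    Given a vector of per-pedigree multipoint LOD scores at a locus, the two statistics are simple sums. The sketch below uses LOD >= 0.588 (nominal p <= 0.05 for a single pedigree) as the significance cutoff, which is an assumption for illustration, and made-up scores:

```python
# sumLINK: sum of LOD scores over pedigrees with nominally significant
# linkage evidence at the locus; sumLOD: sum of all positive LOD scores.
# The 0.588 cutoff (one-sided p = 0.05 for one pedigree) is an assumed
# illustrative choice, and the scores themselves are invented.

def sum_link(lods, threshold=0.588):
    return sum(l for l in lods if l >= threshold)

def sum_lod(lods):
    return sum(l for l in lods if l > 0)

pedigree_lods = [1.2, 0.3, -0.8, 0.7, 0.1]
s_link = sum_link(pedigree_lods)  # 1.2 + 0.7
s_lod = sum_lod(pedigree_lods)    # 1.2 + 0.3 + 0.7 + 0.1
```

    Note that negative evidence from the unlinked pedigree (-0.8) contributes to neither statistic, which is exactly the robustness-to-heterogeneity property the authors emphasize.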

  4. Mapping the defoliation potential of gypsy moth

    Treesearch

    David A. Gansner; Stanford L. Arner; Rachel Riemann Hershey; Susan L. King

    1993-01-01

    A model that uses forest stand characteristics to estimate the likelihood of gypsy moth (Lymantria dispar) defoliation has been developed. It was applied to recent forest inventory plot data to produce susceptibility ratings and a map showing defoliation potential for counties in Pennsylvania and six adjacent states on new frontiers of infestation.

  5. A LANDSAT study of ephemeral and perennial rangeland vegetation and soils

    NASA Technical Reports Server (NTRS)

    Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.

    1976-01-01

    The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique was dependent on prescribed mapping and site condition. Single channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.
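    The maximum likelihood classification step applied to the Montana data can be sketched as a per-pixel Gaussian classifier. The class statistics below are invented, and a diagonal covariance is assumed for brevity:

```python
# Per-pixel maximum likelihood classification sketch: each class is a
# Gaussian (diagonal covariance here) in spectral band space, and a pixel
# is assigned to the class maximizing the log-likelihood. Training
# statistics are hypothetical two-band values.
import math

def log_likelihood(pixel, mean, var):
    # Diagonal-covariance Gaussian log-density (additive constants dropped).
    return sum(-0.5 * math.log(v) - (p - m) ** 2 / (2 * v)
               for p, m, v in zip(pixel, mean, var))

def classify(pixel, classes):
    return max(classes, key=lambda c: log_likelihood(pixel, *classes[c]))

# (mean per band, variance per band) for two hypothetical cover types:
classes = {
    "vegetation": ([40.0, 90.0], [25.0, 100.0]),
    "bare_soil": ([80.0, 60.0], [64.0, 64.0]),
}
label = classify([45.0, 85.0], classes)
```

    In sparsely vegetated terrain the class distributions overlap heavily in band space, which is consistent with the authors' finding that this classifier added little over a single-channel density slice there.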

  6. Fuzzy fractals, chaos, and noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zardecki, A.

    1997-05-01

    To distinguish between chaotic and noisy processes, the authors analyze one- and two-dimensional chaotic mappings, supplemented by additive noise terms. The predictive power of a fuzzy rule-based system allows one to distinguish ergodic and chaotic time series: in an ergodic series the likelihood of finding large numbers is small compared to the likelihood of finding them in a chaotic series. In the case of two dimensions, they consider fractal fuzzy sets whose α-cuts are fractals, arising in the context of a quadratic mapping in the extended complex plane. In an example provided by the Julia set, the concept of Hausdorff dimension enables one to decide in favor of chaotic or noisy evolution.

  7. The network and transmission of based on the principle of laser multipoint communication

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Liu, Xianzhu; Jiang, Huilin; Hu, Yuan; Jiang, Lun

    2014-11-01

    Space laser communication is a strong candidate for the future space-ground integrated information backbone network. This paper introduces the structure of that network: a large-capacity, high-speed broadband information network in which a variety of communication platforms, such as land, sea, air and deep-space users or aircraft, are densely interconnected, using technologies for intelligent high-speed processing, switching and routing. Following the principle of maximizing the effective, comprehensive use of information resources, information is acquired accurately, processed quickly and transmitted efficiently over inter-satellite, satellite-to-earth, sky and ground-station links; that is, the network integrates space-based, air-based and ground-based segments. Starting from trends in laser communication, the current state of laser multi-point communication is reviewed, transmission schemes for dynamic multi-point wireless laser communication networks are studied, and the characteristics and scope of each scheme are described in detail: an optical multiplexer based on a multi-port communication form is applied to relay backbone links; one based on segmenting the receiver field of view is applied to small-angle links; one based on a three-concentric-sphere structure is applied to short-range, mobile scenarios; and a multi-point stitching structure based on a paraboloid of revolution is applied to inter-satellite communication. The multi-point laser communication terminal consists of transmitting and receiving antennas, a relay optical system, a beam-splitting system, and the communication receiver and transmitter systems. When the optical multiplexer serves four or more targets, its ratio of received power to volume and weight offers clear advantages, and it can flexibly track multiple moving targets. This work provides a reference for the construction of space-ground integrated information networks.

  8. Reliable Classification of Geologic Surfaces Using Texture Analysis

    NASA Astrophysics Data System (ADS)

    Foil, G.; Howarth, D.; Abbey, W. J.; Bekker, D. L.; Castano, R.; Thompson, D. R.; Wagstaff, K.

    2012-12-01

    Communication delays and bandwidth constraints are major obstacles for remote exploration spacecraft. Due to such restrictions, spacecraft could make use of onboard science data analysis to maximize scientific gain, through capabilities such as the generation of bandwidth-efficient representative maps of scenes, autonomous instrument targeting to exploit targets of opportunity between communications, and downlink prioritization to ensure fast delivery of tactically-important data. Of particular importance to remote exploration is the precision of such methods and their ability to reliably reproduce consistent results in novel environments. Spacecraft resources are highly oversubscribed, so any onboard data analysis must provide a high degree of confidence in its assessment. The TextureCam project is constructing a "smart camera" that can analyze surface images to autonomously identify scientifically interesting targets and direct narrow field-of-view instruments. The TextureCam instrument incorporates onboard scene interpretation and mapping to assist these autonomous science activities. Computer vision algorithms map scenes such as those encountered during rover traverses. The approach, based on a machine learning strategy, trains a statistical model to recognize different geologic surface types and then classifies every pixel in a new scene according to these categories. We describe three methods for increasing the precision of the TextureCam instrument. The first uses ancillary data to segment challenging scenes into smaller regions having homogeneous properties. These subproblems are individually easier to solve, preventing uncertainty in one region from contaminating those that can be confidently classified. The second involves a Bayesian approach that maximizes the likelihood of correct classifications by abstaining from ambiguous ones. We evaluate these two techniques on a set of images acquired during field expeditions in the Mojave Desert. 
Finally, the algorithm was expanded to perform robust texture classification across a wide range of lighting conditions. We characterize both the increase in precision achieved using different input data representations as well as the range of conditions under which reliable performance can be achieved. An ensemble learning approach is used to increase performance by leveraging the illumination-dependent statistics of an image. Our results show that the three algorithmic modifications lead to a significant increase in classification performance as well as an increase in precision using an adjustable and human-understandable metric of confidence.
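    The abstention idea in the second method can be reduced to a threshold on the maximum class posterior; the posteriors and threshold below are illustrative, not TextureCam's actual parameters:

```python
# "Abstain when ambiguous" sketch: a pixel is labeled only when its
# maximum class posterior clears a confidence threshold, trading
# coverage for precision. Classes, posteriors and the 0.9 threshold
# are all invented for illustration.

def classify_with_abstention(posteriors, threshold=0.9):
    # posteriors: dict mapping class name -> posterior probability
    label, p = max(posteriors.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None  # None = abstain

confident = {"basalt": 0.95, "sandstone": 0.04, "soil": 0.01}
ambiguous = {"basalt": 0.55, "sandstone": 0.40, "soil": 0.05}
```

    Raising the threshold increases precision on the pixels that are labeled, at the cost of leaving more of the scene unclassified, which is the trade-off an oversubscribed spacecraft can tune.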

  9. Consumer preferences for beef color and packaging did not affect eating satisfaction.

    PubMed

    Carpenter, C E; Cornforth, D P; Whittier, D

    2001-04-01

    We investigated whether consumer preferences for beef colors (red, purple, and brown) or for beef packaging systems (modified atmosphere, MAP; vacuum skin pack, VSP; or overwrap with polyvinyl chloride, PVC) influenced taste scores of beef steaks and patties. To test beef color effects, boneless beef top loin steaks (choice) and ground beef patties (20% fat) were packaged in different atmospheres to promote development of red, purple, and brown color. To test effects of package type, steaks and patties were pre-treated with carbon monoxide in MAP to promote development of red color, and some meat was repackaged using VSP or PVC overwrap. The differently colored and packaged meats were separately displayed for members of four consumer panels who evaluated appearance and indicated their likelihood to purchase similar meat. Next, the panelists tasted meat samples from what they had been told were the packaging treatments just observed. However, the meat samples actually served were from a single untreated steak or patty. Thus, any difference in taste scores should reflect expectations established during the visual evaluation. The same ballot and sample coding were used for both the visual and taste evaluations. Color and packaging influenced (P<0.001) appearance scores and likelihood to purchase. Appearance scores were rated red>purple >brown and PVC >VSP>MAP. Appearance scores and likelihood to purchase were correlated (r=0.9). However, color or packaging did not affect (P>0.5) taste scores. Thus, consumer preferences for beef color and packaging influenced likelihood to purchase, but did not bias eating satisfaction.

  10. Scalar discrete nonlinear multipoint boundary value problems

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jesus; Taylor, Padraic

    2007-06-01

    In this paper we provide sufficient conditions for the existence of solutions to scalar discrete nonlinear multipoint boundary value problems. By allowing more general boundary conditions and by imposing less restrictions on the nonlinearities, we obtain results that extend previous work in the area of discrete boundary value problems [Debra L. Etheridge, Jesus Rodriguez, Periodic solutions of nonlinear discrete-time systems, Appl. Anal. 62 (1996) 119-137; Debra L. Etheridge, Jesus Rodriguez, Scalar discrete nonlinear two-point boundary value problems, J. Difference Equ. Appl. 4 (1998) 127-144].

  11. Recent Results from NASA's Morphing Project

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Washburn, Anthony E.; Horta, Lucas G.; Bryant, Robert G.; Cox, David E.; Siochi, Emilie J.; Padula, Sharon L.; Holloway, Nancy M.

    2002-01-01

    The NASA Morphing Project seeks to develop and assess advanced technologies and integrated component concepts to enable efficient, multi-point adaptability in air and space vehicles. In the context of the project, the word "morphing" is defined as "efficient, multi-point adaptability" and may include macro, micro, structural and/or fluidic approaches. The project includes research on smart materials, adaptive structures, micro flow control, biomimetic concepts, optimization and controls. This paper presents an updated overview of the content of the Morphing Project including highlights of recent research results.

  12. Modeling forest bird species' likelihood of occurrence in Utah with Forest Inventory and Analysis and Landfire map products and ecologically based pseudo-absence points

    Treesearch

    Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen

    2007-01-01

    Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...

  13. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input and easily implements either affine (i.e. linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS-warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
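    The registration criterion itself is easy to state: the mutual information of the joint intensity histogram of the two images, which the optimizer maximizes over the transform parameters. A minimal sketch on tiny synthetic intensity lists:

```python
# Mutual information of two intensity sequences, estimated from their
# joint histogram. The image "intensities" below are synthetic; a real
# registration loop would resample one volume under a candidate
# transform and maximize this value over the transform parameters.
import math
from collections import Counter

def mutual_information(a, b):
    n = len(a)
    pa = Counter(a)
    pb = Counter(b)
    pab = Counter(zip(a, b))
    return sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

img1 = [0, 0, 1, 1, 2, 2]
aligned = [5, 5, 6, 6, 7, 7]   # one-to-one intensity correspondence
shuffled = [6, 7, 5, 7, 5, 6]  # correspondence destroyed
mi_aligned = mutual_information(img1, aligned)
mi_shuffled = mutual_information(img1, shuffled)
```

    Because mutual information only asks that intensities correspond, not that they be equal, the same criterion works across modalities such as PET/CT or SPECT/CT.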

  14. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  15. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
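    The likelihood ratio tests used for model selection follow the standard recipe: twice the log-likelihood gap between nested models is referred to a chi-square distribution with degrees of freedom equal to the number of constrained parameters. The fitted log-likelihood values below are hypothetical:

```python
# Likelihood ratio test sketch for nested models, e.g. testing whether
# missingness is at random by constraining the non-ignorability
# parameter(s) to zero. Log-likelihood values are invented.

def likelihood_ratio_stat(loglik_full, loglik_reduced):
    # Twice the log-likelihood gap; chi-square under the null.
    return 2.0 * (loglik_full - loglik_reduced)

# Hypothetical maximized log-likelihoods from nested fits:
lr = likelihood_ratio_stat(-120.3, -123.9)
# With 1 constrained parameter, the 5% chi-square critical value is 3.84.
reject_mar = lr > 3.84
```

    Rejecting here would indicate that the extra missingness parameter improves fit significantly, i.e. evidence against missingness at random under the assumed model.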

  16. Planck 2013 results. XXVI. Background geometry and topology of the Universe

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; McEwen, J. 
D.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peiris, H. V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pogosyan, D.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Riazuelo, A.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    The new cosmic microwave background (CMB) temperature maps from Planck provide the highest-quality full-sky view of the surface of last scattering available to date. This allows us to detect possible departures from the standard model of a globally homogeneous and isotropic cosmology on the largest scales. We search for correlations induced by a possible non-trivial topology with a fundamental domain intersecting, or nearly intersecting, the last scattering surface (at comoving distance χrec), both via a direct search for matched circular patterns at the intersections and by an optimal likelihood search for specific topologies. For the latter we consider flat spaces with cubic toroidal (T3), equal-sided chimney (T2) and slab (T1) topologies, three multi-connected spaces of constant positive curvature (dodecahedral, truncated cube and octahedral) and two compact negative-curvature spaces. These searches yield no detection of the compact topology with the scale below the diameter of the last scattering surface. For most compact topologies studied the likelihood maximized over the orientation of the space relative to the observed map shows some preference for multi-connected models just larger than the diameter of the last scattering surface. Since this effect is also present in simulated realizations of isotropic maps, we interpret it as the inevitable alignment of mild anisotropic correlations with chance features in a single sky realization; such a feature can also be present, in milder form, when the likelihood is marginalized over orientations. 
Thus marginalized, the limits on the radius ℛi of the largest sphere inscribed in topological domain (at log-likelihood-ratio Δln ℒ > -5 relative to a simply-connected flat Planck best-fit model) are: in a flat Universe, ℛi> 0.92χrec for the T3 cubic torus; ℛi> 0.71χrec for the T2 chimney; ℛi> 0.50χrec for the T1 slab; and in a positively curved Universe, ℛi> 1.03χrec for the dodecahedral space; ℛi> 1.0χrec for the truncated cube; and ℛi> 0.89χrec for the octahedral space. The limit for a wider class of topologies, i.e., those predicting matching pairs of back-to-back circles, among them tori and the three spherical cases listed above, coming from the matched-circles search, is ℛi> 0.94χrec at 99% confidence level. Similar limits apply to a wide, although not exhaustive, range of topologies. We also perform a Bayesian search for an anisotropic global Bianchi VIIh geometry. In the non-physical setting where the Bianchi cosmology is decoupled from the standard cosmology, Planck data favour the inclusion of a Bianchi component with a Bayes factor of at least 1.5 units of log-evidence. Indeed, the Bianchi pattern is quite efficient at accounting for some of the large-scale anomalies found in Planck data. However, the cosmological parameters that generate this pattern are in strong disagreement with those found from CMB anisotropy data alone. In the physically motivated setting where the Bianchi parameters are coupled and fitted simultaneously with the standard cosmological parameters, we find no evidence for a Bianchi VIIh cosmology and constrain the vorticity of such models to (ω/H)0< 8.1 × 10-10 (95% confidence level).

  17. [Linkage analysis of susceptibility loci in 2 target chromosomes in pedigrees with paranoid schizophrenia and undifferentiated schizophrenia].

    PubMed

    Zeng, Li-ping; Hu, Zheng-mao; Mu, Li-li; Mei, Gui-sen; Lu, Xiu-ling; Zheng, Yong-jun; Li, Pei-jian; Zhang, Ying-xue; Pan, Qian; Long, Zhi-gao; Dai, He-ping; Zhang, Zhuo-hua; Xia, Jia-hui; Zhao, Jing-ping; Xia, Kun

    2011-06-01

    To investigate the relationship between susceptibility loci in chromosomes 1q21-25 and 6p21-25 and schizophrenia subtypes in the Chinese population. A genomic scan and parametric and non-parametric analyses were performed on 242 individuals from 36 schizophrenia pedigrees, including 19 paranoid schizophrenia and 17 undifferentiated schizophrenia pedigrees, from Henan province of China, using 5 microsatellite markers in the chromosome region 1q21-25 and 8 microsatellite markers in the chromosome region 6p21-25, which were candidates from previous studies. All affected subjects were diagnosed and typed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR; American Psychiatric Association, 2000). All subjects signed informed consent. In chromosome 1, parametric analysis of all 36 pedigrees under the dominant inheritance mode showed that the maximum multi-point heterogeneity LOD (HLOD) score was 1.33 (α = 0.38). The non-parametric analysis and the single-point and multi-point nonparametric linkage (NPL) scores suggested linkage at D1S484, D1S2878, and D1S196. In the 19 paranoid schizophrenia pedigrees, linkage was not observed for any of the 5 markers. In the 17 undifferentiated schizophrenia pedigrees, the multi-point NPL score was 1.60 (P = 0.0367) at D1S484; the single-point NPL score was 1.95 (P = 0.0145) and the multi-point NPL score was 2.39 (P = 0.0041) at D1S2878; and the multi-point NPL score was 1.74 (P = 0.0255) at D1S196. These same three loci showed suggestive linkage in the integrative analysis of all 36 pedigrees. In chromosome 6, in parametric linkage analysis under both dominant and recessive inheritance modes and in non-parametric linkage analysis of all 36 pedigrees and of the 17 undifferentiated schizophrenia pedigrees, linkage was not observed for any of the 8 markers.
In the 19 paranoid schizophrenia pedigrees, parametric analysis showed that, under the recessive inheritance mode, the maximum single-point HLOD score was 1.26 (α = 0.40) and the multi-point HLOD score was 1.12 (α = 0.38) at D6S289 in chromosome 6p23. In non-parametric analysis, the single-point NPL score was 1.52 (P = 0.0402) and the multi-point NPL score was 1.92 (P = 0.0206) at D6S289. Susceptibility genes for undifferentiated schizophrenia are thus likely present in chromosome regions 1q23.3 and 1q24.2, near the D1S484, D1S2878, and D1S196 loci, and susceptibility genes for paranoid schizophrenia in chromosome region 6p23, near the D6S289 locus.
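
    The LOD statistics reported above can be illustrated with a minimal two-point example. The sketch below is an illustrative assumption, not the multipoint HLOD/NPL machinery used in the study: it computes a LOD score as the log10 likelihood ratio of the estimated recombination fraction against the null of free recombination (θ = 0.5).

```python
import math

def lod_score(recombinants: int, total: int) -> float:
    """Two-point LOD: log10 likelihood ratio of the MLE recombination
    fraction theta_hat = recombinants/total against no linkage (theta = 0.5)."""
    theta = recombinants / total
    def loglik(t: float) -> float:
        # Binomial log10-likelihood; skip terms whose count is zero so the
        # boundary cases theta = 0 or 1 do not take log of zero.
        ll = 0.0
        if recombinants:
            ll += recombinants * math.log10(t)
        if total - recombinants:
            ll += (total - recombinants) * math.log10(1 - t)
        return ll
    return loglik(theta) - loglik(0.5)

# 10 informative meioses, 1 recombinant: moderate evidence for linkage
score = lod_score(1, 10)
```

A LOD of 3 is the conventional genome-wide significance threshold for a single two-point test; the HLOD used in the study additionally maximizes over the admixture parameter α.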

  18. Extraction of repetitive transients with frequency domain multipoint kurtosis for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liao, Yuhe; Sun, Peng; Wang, Baoxiang; Qu, Lei

    2018-05-01

    The appearance of repetitive transients in a vibration signal is a typical feature of faulty rolling element bearings. However, accurate extraction of these fault-related characteristic components has always been a challenging task, especially when there is interference from large-amplitude impulsive noise. A frequency domain multipoint kurtosis (FDMK)-based fault diagnosis method is proposed in this paper. The multipoint kurtosis is redefined in the frequency domain, which improves computational accuracy. An envelope autocorrelation function is also presented to estimate the fault characteristic frequency, which is used to set the frequency hunting zone of the FDMK. The FDMK, instead of kurtosis, is then utilized to generate a fast kurtogram, and only the optimal band with the maximum FDMK value is selected for envelope analysis. Negative interference from both large-amplitude impulsive noise and harmonic components related to shaft rotational speed is therefore greatly reduced. The analysis results of simulation and experimental data verify the capability and feasibility of this FDMK-based method.
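
    Why a kurtosis-style indicator responds to repetitive transients can be sketched with a toy envelope analysis. The code below is an illustrative assumption, not the authors' FDMK: the `kurtosis` and `envelope` helpers are hypothetical names, and the "fault" is simulated as a periodic impulse train buried in Gaussian noise.

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4."""
    x = np.asarray(x, float)
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / s2 ** 2

def envelope(x):
    """Signal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0     # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0         # Nyquist bin for even-length signals
    return np.abs(np.fft.ifft(X * h))

# Repetitive transients (a faulty-bearing-like impulse train) are far more
# impulsive than Gaussian noise, so their envelope kurtosis is much higher.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
noise = np.random.default_rng(0).standard_normal(len(t))
transients = np.zeros(len(t))
transients[::100] = 10.0        # one impulse every 0.1 s (fault period)
k_noise = kurtosis(envelope(noise))
k_fault = kurtosis(envelope(noise + transients))
```

The FDMK of the paper goes further: it targets a *known* repetition period, so a single large random impulse (high plain kurtosis, but not repetitive) does not mislead the band selection.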

  19. Numerical Solution of Systems of Loaded Ordinary Differential Equations with Multipoint Conditions

    NASA Astrophysics Data System (ADS)

    Assanova, A. T.; Imanchiyev, A. E.; Kadirbayeva, Zh. M.

    2018-04-01

    A system of loaded ordinary differential equations with multipoint conditions is considered. The problem under study is reduced to an equivalent boundary value problem for a system of ordinary differential equations with parameters. A system of linear algebraic equations for the parameters is constructed using the matrices of the loaded terms and the multipoint condition. The conditions for the unique solvability and well-posedness of the original problem are established in terms of the matrix made up of the coefficients of the system of linear algebraic equations. The coefficients and the right-hand side of the constructed system are determined by solving Cauchy problems for linear ordinary differential equations. The solutions of the system are found in terms of the values of the desired function at the initial points of subintervals. The parametrization method is numerically implemented using the fourth-order accurate Runge-Kutta method as applied to the Cauchy problems for ordinary differential equations. The performance of the constructed numerical algorithms is illustrated by examples.
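
    A scalar instance of this parametrization scheme can be sketched as follows: the solution is written as a linear function of the parameter p = u(0), the coefficient function is obtained by solving a Cauchy problem with classical RK4, and the multipoint condition collapses to one linear algebraic equation for p. This is a minimal sketch under assumed toy data (the equation u' = u with condition u(0) + u(0.5) + u(1) = 1), not the paper's general loaded-system algorithm.

```python
import numpy as np

def rk4(f, u0, ts):
    """Classical fourth-order Runge-Kutta for a scalar ODE u' = f(t, u)."""
    u = np.empty(len(ts))
    u[0] = u0
    for i in range(len(ts) - 1):
        t, h = ts[i], ts[i + 1] - ts[i]
        k1 = f(t, u[i])
        k2 = f(t + h / 2, u[i] + h * k1 / 2)
        k3 = f(t + h / 2, u[i] + h * k2 / 2)
        k4 = f(t + h, u[i] + h * k3)
        u[i + 1] = u[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

# Problem: u' = u, multipoint condition u(0) + u(0.5) + u(1) = 1.
# The solution is linear in the parameter p = u(0): u(t) = p * phi(t),
# where phi solves the homogeneous Cauchy problem with phi(0) = 1.
# The multipoint condition becomes one linear equation for p.
ts = np.linspace(0.0, 1.0, 101)
phi = rk4(lambda t, u: u, 1.0, ts)   # phi(t) ~ exp(t)
idx = [0, 50, 100]                   # grid indices of t = 0, 0.5, 1
p = 1.0 / phi[idx].sum()             # solve p * (phi(0)+phi(0.5)+phi(1)) = 1
u = p * phi
residual = u[idx].sum() - 1.0        # multipoint condition check
```

For a system of n equations with loaded terms, p becomes a vector of values at subinterval endpoints and the final step solves the matrix equation described in the abstract.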

  20. Uncountably many maximizing measures for a dense subset of continuous functions

    NASA Astrophysics Data System (ADS)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given 'performance' function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions, there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  1. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and of Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters, such as the initial period and magnetic field distributions and the radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
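
    The advantage of Poisson statistics for small detected counts can be sketched with a toy binned likelihood and a one-parameter grid search. This is an illustrative assumption only: the bin counts and model shape below are made up, not taken from the survey data.

```python
import math

def poisson_loglik(observed, expected):
    """Log-likelihood of independent Poisson bins:
    sum over bins of k*ln(lam) - lam - ln(k!).  Appropriate when bin
    counts are small and a Gaussian chi-square would be a poor fit."""
    ll = 0.0
    for k, lam in zip(observed, expected):
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

# Toy grid search: binned "detected pulsar" counts versus a one-parameter
# model predicting amp * base_shape counts per bin (hypothetical numbers).
observed = [3, 7, 12, 6, 2]
base_shape = [0.1, 0.25, 0.4, 0.2, 0.05]   # normalized model shape
grid = [a / 10 for a in range(100, 500)]   # amplitudes 10.0 .. 49.9
best = max(grid,
           key=lambda a: poisson_loglik(observed, [a * s for s in base_shape]))
```

For a pure amplitude parameter the Poisson MLE is the total observed count divided by the total model shape (here 30/1.0 = 30), which the grid search recovers; the paper's analysis does the analogous search over multi-dimensional birth-distribution parameters.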

  2. Planning, Execution, and Assessment of Effects-Based Operations (EBO)

    DTIC Science & Technology

    2006-05-01

    time of execution that would maximize the likelihood of achieving a desired effect. GMU has developed a methodology, named ECAD-EA (Effective...Algorithm EBO Effects Based Operations ECAD-EA Effective Course of Action-Evolutionary Algorithm GMU George Mason University GUI Graphical...Probability Profile Generation ... A.2.11 Running ECAD-EA (Effective Courses of Action Determination

  3. Differences in seasonal variation between two biotypes of Megamelus scutellaris (Hemiptera: Delphacidae), a biological control agent for Eichhornia crassipes in Florida.

    USDA-ARS?s Scientific Manuscript database

    Climate matching between the native and adventive ranges of insects used for biological control is a generally accepted strategy for both increasing the likelihood of establishing an agent, as well as improving its overall performance, thereby maximizing the potential utility of an agent across the...

  4. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods, namely listwise deletion, full information maximum likelihood, regression imputation, and expectation maximization (EM) imputation, were examined in terms of…

  5. Policy Implications Analysis: A Methodological Advancement for Policy Research and Evaluation.

    ERIC Educational Resources Information Center

    Madey, Doren L.; Stenner, A. Jackson

    Policy Implications Analysis (PIA) is a tool designed to maximize the likelihood that an evaluation report will have an impact on decision-making. PIA was designed to help people planning and conducting evaluations tailor their information so that it has optimal potential for being used and acted upon. This paper describes the development and…

  6. In-line multipoint near-infrared spectroscopy for moisture content quantification during freeze-drying.

    PubMed

    Kauppinen, Ari; Toiviainen, Maunu; Korhonen, Ossi; Aaltonen, Jaakko; Järvinen, Kristiina; Paaso, Janne; Juuti, Mikko; Ketolainen, Jarkko

    2013-02-19

    During the past decade, near-infrared (NIR) spectroscopy has been applied to in-line moisture content quantification during freeze-drying. However, NIR has been used as a single-vial technique and thus is not representative of the entire batch. This has been considered one of the main barriers to NIR spectroscopy becoming widely used in process analytical technology (PAT) for freeze-drying. Clearly, it is essential to monitor samples that reliably represent the whole batch. The present study evaluated multipoint NIR spectroscopy for in-line moisture content quantification during a freeze-drying process. Aqueous sucrose solutions were used as model formulations. NIR data were calibrated to predict the moisture content using partial least-squares (PLS) regression, with Karl Fischer titration used as the reference method. PLS calibrations resulted in root-mean-square error of prediction (RMSEP) values lower than 0.13%. Three noncontact, diffuse reflectance NIR probe heads were positioned on the freeze-dryer shelf to measure the moisture content noninvasively, through the side of the glass vials. The results showed that detection of unequal sublimation rates within a freeze-dryer shelf was possible with the multipoint NIR system in use. Furthermore, in-line moisture content quantification was reliable, especially toward the end of the process. These findings indicate that multipoint NIR spectroscopy can achieve representative quantification of moisture content and hence determination of the drying end point at a desired residual moisture level.
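
    The calibration-scoring step can be sketched in a few lines. The example below substitutes a univariate linear fit for the multivariate PLS model (an illustrative simplification; the absorbance and moisture values are made-up numbers, not data from the study) and computes the RMSEP figure of merit against reference values.

```python
import numpy as np

def rmsep(y_pred, y_ref):
    """Root-mean-square error of prediction, the statistic used to score a
    calibration against reference (e.g. Karl Fischer) moisture values."""
    y_pred = np.asarray(y_pred, float)
    y_ref = np.asarray(y_ref, float)
    return float(np.sqrt(np.mean((y_pred - y_ref) ** 2)))

# Hypothetical univariate stand-in for the NIR calibration: a linear fit
# of a water-band absorbance feature against reference moisture (% w/w).
# The real work regresses full NIR spectra with multivariate PLS.
absorbance = np.array([0.10, 0.18, 0.25, 0.33, 0.41])
moisture = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
slope, intercept = np.polyfit(absorbance, moisture, 1)
predicted = slope * absorbance + intercept
error = rmsep(predicted, moisture)
```

In practice the RMSEP is computed on held-out prediction samples rather than the calibration set itself, which is what makes the reported sub-0.13% values meaningful.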

  7. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the Cl of all the six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both the codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both the codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole CTTl=2 is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity odd signals TB and EB are found to be consistent with zero.

  8. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and the first CIB likelihood, using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and a likelihood analysis using the CAM-SPEC package, rather than the map-based template removal of foregrounds used in the previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model of the cross-frequency power in terms of amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. Both approaches fit all auto- and cross-power spectra very well, with a best fit of χ2ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  9. Fine Mapping of QUICK ROOTING 1 and 2, Quantitative Trait Loci Increasing Root Length in Rice.

    PubMed

    Kitomi, Yuka; Nakao, Emari; Kawai, Sawako; Kanno, Noriko; Ando, Tsuyu; Fukuoka, Shuichi; Irie, Kenji; Uga, Yusaku

    2018-02-02

    The volume that the root system can occupy is associated with the efficiency of water and nutrient uptake from soil. Genetic improvement of root length, which is a limiting factor for root distribution, is necessary for increasing crop production. In this report, we describe the identification of two quantitative trait loci (QTLs) for maximal root length, QUICK ROOTING 1 (QRO1) on chromosome 2 and QRO2 on chromosome 6, in cultivated rice (Oryza sativa L.). We measured the maximal root length in 26 lines carrying chromosome segments from the long-rooted upland rice cultivar Kinandang Patong in the genetic background of the short-rooted lowland cultivar IR64. Five lines had longer roots than IR64. By rough mapping of the target regions in BC4F2 populations, we detected putative QTLs for maximal root length on chromosomes 2, 6, and 8. To fine-map these QTLs, we used BC4F3 recombinant homozygous lines. QRO1 was mapped between markers RM5651 and RM6107, which delimit a 1.7-Mb interval on chromosome 2, and QRO2 was mapped between markers RM20495 and RM3430-1, which delimit an 884-kb interval on chromosome 6. Both QTLs may be promising gene resources for improving root system architecture in rice. Copyright © 2018 Kitomi et al.

  10. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
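
    The quasi-likelihood view can be made concrete with the Poisson (generalized KL divergence) case, where EM-derived updates reduce to the familiar multiplicative rules. Below is a minimal sketch assuming the standard Lee-Seung KL updates and synthetic data, not the paper's full generalized-linear-model framework.

```python
import numpy as np

def nmf_kl(V, rank, iters=200, seed=0):
    """NMF via multiplicative updates for the generalized KL divergence,
    i.e. the quasi-likelihood appropriate for Poisson (signal-dependent)
    noise.  Returns nonnegative factors W (n x rank) and H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

def kl_div(V, WH):
    """Generalized Kullback-Leibler divergence D(V || WH)."""
    eps = 1e-12
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))

# Synthetic rank-3 data corrupted by Poisson (signal-dependent) noise.
rng = np.random.default_rng(1)
V = rng.poisson(rng.random((8, 3)) @ rng.random((3, 10)) * 5).astype(float)
W, H = nmf_kl(V, rank=3)
fit = kl_div(V, W @ H)
```

The divergence `fit` plays the role of the goodness-of-fit measure the abstract mentions; other members of the quasi-likelihood family swap in different divergences and update rules.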

  11. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  12. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  13. Multipoint Green's functions in 1 + 1 dimensional integrable quantum field theories

    DOE PAGES

    Babujian, H. M.; Karowski, M.; Tsvelik, A. M.

    2017-02-14

    We calculate the multipoint Green's functions in 1+1 dimensional integrable quantum field theories. We use the crossing formula for general models and calculate the 3- and 4-point functions, taking into account only the lowest nontrivial intermediate-state contributions. We then apply the general results to the examples of the scaling Z2 Ising model, the sinh-Gordon model and the Z3 scaling Potts model, and demonstrate these calculations explicitly. The results can be applied to physical phenomena such as Raman scattering.

  14. Satellite economics in the 1980's

    NASA Astrophysics Data System (ADS)

    Morgan, W. L.

    1980-01-01

    Satellite traffic, competition, and decreasing costs are discussed, as are capabilities in telecommunication (including entertainment) and computation. Also considered are future teleconferencing and telecommuting to offset the cost of transportation, the establishment of a manufacturer-to-user link for increased home minicomputer capability, and an increase of digital over analog traffic. It is suggested that transcontinental bulk traffic, high-speed data, and multipoint private networks will eventually be handled by satellites which are cost-insensitive to distance, readily match dynamically varying multipoint networks, and have uniformly wide bandwidths available to both major cities and isolated towns.

  15. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
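
    The EAP estimator itself is a short quadrature computation: the posterior mean of ability given the response pattern. A minimal sketch, assuming Rasch (1PL) items and a standard-normal prior; the item difficulties below are made up for illustration.

```python
import math

def eap_theta(responses, difficulties, n_quad=81):
    """Expected a posteriori ability estimate for Rasch items under a
    standard-normal prior, via simple quadrature over theta in [-4, 4]."""
    def p_correct(theta, b):
        return 1.0 / (1.0 + math.exp(-(theta - b)))
    num = den = 0.0
    for q in range(n_quad):
        theta = -4.0 + 8.0 * q / (n_quad - 1)
        w = math.exp(-0.5 * theta * theta)   # N(0,1) prior, unnormalized
        like = 1.0
        for x, b in zip(responses, difficulties):
            p = p_correct(theta, b)
            like *= p if x else 1.0 - p
        num += theta * w * like
        den += w * like
    return num / den             # posterior mean of theta

# All-correct and all-wrong patterns still get finite estimates, one of
# the advantages of EAP over maximum likelihood, where they diverge.
high = eap_theta([1, 1, 1], [-1.0, 0.0, 1.0])
low = eap_theta([0, 0, 0], [-1.0, 0.0, 1.0])
```

The prior is also what produces the regression toward the mean the abstract mentions: both estimates are pulled toward zero relative to the (infinite) maximum likelihood values.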

  16. Comparing Forest/Nonforest Classifications of Landsat TM Imagery for Stratifying FIA Estimates of Forest Land Area

    Treesearch

    Mark D. Nelson; Ronald E. McRoberts; Greg C. Liknes; Geoffrey R. Holden

    2005-01-01

    Landsat Thematic Mapper (TM) satellite imagery and Forest Inventory and Analysis (FIA) plot data were used to construct forest/nonforest maps of Mapping Zone 41, National Land Cover Dataset 2000 (NLCD 2000). Stratification approaches resulting from Maximum Likelihood, Fuzzy Convolution, Logistic Regression, and k-Nearest Neighbors classification/prediction methods were...

  17. GRO/EGRET data analysis software: An integrated system of custom and commercial software using standard interfaces

    NASA Technical Reports Server (NTRS)

    Laubenthal, N. A.; Bertsch, D.; Lal, N.; Etienne, A.; Mcdonald, L.; Mattox, J.; Sreekumar, P.; Nolan, P.; Fierro, J.

    1992-01-01

    The Energetic Gamma Ray Telescope Experiment (EGRET) on the Compton Gamma Ray Observatory has been in orbit for more than a year and is being used to map the full sky for gamma rays in a wide energy range from 30 to 20,000 MeV. Already these measurements have resulted in a wide range of exciting new information on quasars, pulsars, galactic sources, and diffuse gamma ray emission. The central part of the analysis is done with sky maps that typically cover an 80 x 80 degree section of the sky for an exposure time of several days. Specific software developed for this program generates the counts, exposure, and intensity maps. The analysis is done on a network of UNIX based workstations and takes full advantage of a custom-built user interface called X-dialog. The maps that are generated are stored in the FITS format for a collection of energies. These, along with similar diffuse emission background maps generated from a model calculation, serve as input to a maximum likelihood program that produces maps of likelihood with optional contours that are used to evaluate regions for sources. Likelihood also evaluates the background corrected intensity at each location for each energy interval from which spectra can be generated. Being in a standard FITS format permits all of the maps to be easily accessed by the full complement of tools available in several commercial astronomical analysis systems. In the EGRET case, IDL is used to produce graphics plots in two and three dimensions and to quickly implement any special evaluation that might be desired. Other custom-built software, such as the spectral and pulsar analyses, take advantage of the XView toolkit for display and Postscript output for the color hard copy. This poster paper outlines the data flow and provides examples of the user interfaces and output products. 
It stresses the advantages that are derived from the integration of the specific instrument-unique software and powerful commercial tools for graphics and statistical evaluation. This approach has several proven advantages including flexibility, a minimum of development effort, ease of use, and portability.

  18. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which it yields an asymptotically equivalent convergence point of the estimated parameters; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider the feature space that limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
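
    The dynamic-programming likelihood computation in question can be sketched for a discrete hidden Markov model with the scaled forward algorithm; its cost grows linearly in sequence length, which is why truncating the length (the feature map studied above) reduces the cost of iterated likelihood evaluations. The matrices below are illustrative assumptions.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm (one pass, O(len * n^2))."""
    alpha = pi * B[:, obs[0]]           # forward variables at t = 0
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()                # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate and emit
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return float(ll)

pi = np.array([0.6, 0.4])               # initial state distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])  # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])  # emission matrix (2 symbols)
ll = hmm_loglik([0, 0, 1, 0], pi, A, B)
```
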

  19. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the remaining, modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred or has not been reported yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization algorithm. The ZIP model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
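
    The estimation step can be sketched with a small EM loop for the zero-inflated Poisson mixture: a point mass at zero (the "true zeros," with probability p) mixed with a Poisson(λ) component. This is a minimal sketch of the general technique with made-up counts, not the paper's full likelihood ratio test.

```python
import math

def zip_em(counts, iters=200):
    """EM for a zero-inflated Poisson: returns estimates (p, lam) of the
    structural-zero probability and the Poisson mean."""
    n = len(counts)
    p, lam = 0.5, max(sum(counts) / n, 1e-6)
    n0 = counts.count(0)
    for _ in range(iters):
        # E-step: posterior probability an observed zero is structural
        w = p / (p + (1 - p) * math.exp(-lam))
        # M-step: update mixture weight and Poisson mean
        p = w * n0 / n
        lam = max(sum(counts) / (n - w * n0), 1e-6)
    return p, lam

# Counts with clear excess zeros relative to a plain Poisson:
# Poisson(0.9) would predict ~41 zeros out of 100, not 60.
counts = [0] * 60 + [1] * 10 + [2] * 15 + [3] * 10 + [4] * 5
p_hat, lam_hat = zip_em(counts)
```

In the paper these MLEs feed a likelihood ratio statistic comparing each cell's reporting rate to the rest of the table; here we only show the parameter estimation.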

  20. State-Space Modeling of Dynamic Psychological Processes via the Kalman Smoother Algorithm: Rationale, Finite Sample Properties, and Applications

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2009-01-01

    This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, combined with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
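
    The filtering half of this procedure can be sketched for the simplest scalar state-space model. This is a minimal sketch under assumed toy data: a full EM pass would also run the smoother backward and use the smoothed moments to re-estimate the variances q and r, which is omitted here.

```python
import math

def kalman_filter_loglik(ys, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk-plus-noise state space model:
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (state equation)
        y_t = x_t + v_t,      v_t ~ N(0, r)   (measurement equation)
    Returns filtered state means and the prediction-error log-likelihood
    that an EM loop would maximize over (q, r)."""
    x, p, ll, means = x0, p0, 0.0, []
    for y in ys:
        p_pred = p + q                  # time update of state variance
        s = p_pred + r                  # innovation variance
        k = p_pred / s                  # Kalman gain
        innov = y - x                   # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * s) + innov ** 2 / s)
        x = x + k * innov               # measurement update
        p = (1 - k) * p_pred
        means.append(x)
    return means, ll

ys = [0.9, 1.1, 1.0, 1.2, 0.8]          # hypothetical observed series
means, ll = kalman_filter_loglik(ys, q=0.01, r=0.1)
```

The process factor analysis models of the article generalize this to vector states (latent factors) and a factor-loading measurement matrix, but the filter/likelihood recursion has the same shape.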

  1. Multipoint Observations of Energetic Particle Injections and Substorm Activity During a Conjunction Between Magnetospheric Multiscale (MMS) and Van Allen Probes

    NASA Astrophysics Data System (ADS)

    Turner, D. L.; Fennell, J. F.; Blake, J. B.; Claudepierre, S. G.; Clemmons, J. H.; Jaynes, A. N.; Leonard, T.; Baker, D. N.; Cohen, I. J.; Gkioulidou, M.; Ukhorskiy, A. Y.; Mauk, B. H.; Gabrielse, C.; Angelopoulos, V.; Strangeway, R. J.; Kletzing, C. A.; Le Contel, O.; Spence, H. E.; Torbert, R. B.; Burch, J. L.; Reeves, G. D.

    2017-11-01

    This study examines multipoint observations during a conjunction between Magnetospheric Multiscale (MMS) and Van Allen Probes on 7 April 2016 in which a series of energetic particle injections occurred. With complementary data from Time History of Events and Macroscale Interactions during Substorms, Geotail, and Los Alamos National Laboratory spacecraft in geosynchronous orbit (16 spacecraft in total), we develop new insights on the nature of energetic particle injections associated with substorm activity. Despite this case involving only weak substorm activity (maximum AE <300 nT) during quiet geomagnetic conditions in steady, below-average solar wind, a complex series of at least six different electron injections was observed throughout the system. Intriguingly, only one corresponding ion injection was clearly observed. All ion and electron injections were observed at <600 keV only. MMS reveals detailed substructure within the largest electron injection. A relationship between injected electrons with energy <60 keV and enhanced whistler mode chorus wave activity is also established from Van Allen Probes and MMS. Drift mapping using a simplified magnetic field model provides estimates of the dispersionless injection boundary locations as a function of universal time, magnetic local time, and L shell. The analysis reveals that at least five electron injections, which were localized in magnetic local time, preceded a larger injection of both electrons and ions across nearly the entire nightside of the magnetosphere near geosynchronous orbit. The larger ion and electron injection did not penetrate to L < 6.6, but several of the smaller electron injections penetrated to L < 6.6. Due to the discrepancy between the number, penetration depth, and complexity of electron versus ion injections, this event presents challenges to the current conceptual models of energetic particle injections.

  2. Multipoint Observations of Energetic Particle Injections and Substorm Activity During a Conjunction Between Magnetospheric Multiscale (MMS) and Van Allen Probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Drew L.; Fennell, J. F.; Blake, J. B.

    Here, this study examines multipoint observations during a conjunction between Magnetospheric Multiscale (MMS) and Van Allen Probes on 7 April 2016 in which a series of energetic particle injections occurred. With complementary data from Time History of Events and Macroscale Interactions during Substorms, Geotail, and Los Alamos National Laboratory spacecraft in geosynchronous orbit (16 spacecraft in total), we develop new insights on the nature of energetic particle injections associated with substorm activity. Despite this case involving only weak substorm activity (maximum AE <300 nT) during quiet geomagnetic conditions in steady, below-average solar wind, a complex series of at least six different electron injections was observed throughout the system. Intriguingly, only one corresponding ion injection was clearly observed. All ion and electron injections were observed at <600 keV only. MMS reveals detailed substructure within the largest electron injection. A relationship between injected electrons with energy <60 keV and enhanced whistler mode chorus wave activity is also established from Van Allen Probes and MMS. Drift mapping using a simplified magnetic field model provides estimates of the dispersionless injection boundary locations as a function of universal time, magnetic local time, and L shell. The analysis reveals that at least five electron injections, which were localized in magnetic local time, preceded a larger injection of both electrons and ions across nearly the entire nightside of the magnetosphere near geosynchronous orbit. The larger ion and electron injection did not penetrate to L < 6.6, but several of the smaller electron injections penetrated to L < 6.6. Due to the discrepancy between the number, penetration depth, and complexity of electron versus ion injections, this event presents challenges to the current conceptual models of energetic particle injections.

  3. Multipoint Observations of Energetic Particle Injections and Substorm Activity During a Conjunction Between Magnetospheric Multiscale (MMS) and Van Allen Probes

    DOE PAGES

    Turner, Drew L.; Fennell, J. F.; Blake, J. B.; ...

    2017-09-25

    Here, this study examines multipoint observations during a conjunction between Magnetospheric Multiscale (MMS) and Van Allen Probes on 7 April 2016 in which a series of energetic particle injections occurred. With complementary data from Time History of Events and Macroscale Interactions during Substorms, Geotail, and Los Alamos National Laboratory spacecraft in geosynchronous orbit (16 spacecraft in total), we develop new insights on the nature of energetic particle injections associated with substorm activity. Despite this case involving only weak substorm activity (maximum AE <300 nT) during quiet geomagnetic conditions in steady, below-average solar wind, a complex series of at least six different electron injections was observed throughout the system. Intriguingly, only one corresponding ion injection was clearly observed. All ion and electron injections were observed at <600 keV only. MMS reveals detailed substructure within the largest electron injection. A relationship between injected electrons with energy <60 keV and enhanced whistler mode chorus wave activity is also established from Van Allen Probes and MMS. Drift mapping using a simplified magnetic field model provides estimates of the dispersionless injection boundary locations as a function of universal time, magnetic local time, and L shell. The analysis reveals that at least five electron injections, which were localized in magnetic local time, preceded a larger injection of both electrons and ions across nearly the entire nightside of the magnetosphere near geosynchronous orbit. The larger ion and electron injection did not penetrate to L < 6.6, but several of the smaller electron injections penetrated to L < 6.6. Due to the discrepancy between the number, penetration depth, and complexity of electron versus ion injections, this event presents challenges to the current conceptual models of energetic particle injections.

  4. Genomewide Linkage Scan for Split–Hand/Foot Malformation with Long-Bone Deficiency in a Large Arab Family Identifies Two Novel Susceptibility Loci on Chromosomes 1q42.2-q43 and 6q14.1

    PubMed Central

    Naveed, Mohammed; Nath, Swapan K.; Gaines, Mathew; Al-Ali, Mahmoud T.; Al-Khaja, Najib; Hutchings, David; Golla, Jeffrey; Deutsch, Samuel; Bottani, Armand; Antonarakis, Stylianos E.; Ratnamala, Uppala; Radhakrishna, Uppala

    2007-01-01

    Split–hand/foot malformation with long-bone deficiency (SHFLD) is a rare, severe limb deformity characterized by tibia aplasia with or without split-hand/split-foot deformity. Identification of genetic susceptibility loci for SHFLD has been unsuccessful because of its rare incidence, variable phenotypic expression and associated anomalies, and uncertain inheritance pattern. SHFLD is usually inherited as an autosomal dominant trait with reduced penetrance, although recessive inheritance has also been postulated. We conducted a genomewide linkage analysis, using a 10K SNP array in a large consanguineous family (UR078) from the United Arab Emirates (UAE) who had disease transmission consistent with an autosomal dominant inheritance pattern. The study identified two novel SHFLD susceptibility loci at 1q42.2-q43 (nonparametric linkage [NPL] 9.8, P=.000065) and 6q14.1 (NPL 7.12, P=.000897). These results were also supported by multipoint parametric linkage analysis. Maximum multipoint LOD scores of 3.20 and 3.78 were detected for genomic locations 1q42.2-43 and 6q14.1, respectively, with the use of an autosomal dominant mode of inheritance with reduced penetrance. Haplotype analysis with informative crossovers enabled mapping of the SHFLD loci to a region of ∼18.38 cM (8.4 Mb) between single-nucleotide polymorphisms rs1124110 and rs535043 on 1q42.2-q43 and to a region of ∼1.96 cM (4.1 Mb) between rs623155 and rs1547251 on 6q14.1. The study identified two novel loci for the SHFLD phenotype in this UAE family. PMID:17160898

  5. Attenuation correction strategies for multi-energy photon emitters using SPECT

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.

    1997-06-01

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction, or whether it is better to keep the projections separate because of differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were then added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) a weighted mean obtained by combining the 93 keV and 185 keV maps, and 4) an ordered-subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen, and backbone) were under-estimated, although TCRs were comparable to those of the other locations. The weighted-mean and ordered-subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately; they are recommended for quantitation in multi-energy photon SPECT imaging when there is a need to combine the acquisitions of multiple windows.
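    Strategy 3 above amounts to a pixel-wise weighted mean of the two energy-specific attenuation maps. A minimal sketch; the count-based weighting below is an illustrative assumption, not necessarily the paper's exact scheme:

```python
import numpy as np

def combined_attenuation_map(mu_93, mu_185, counts_93, counts_185):
    """Combine two energy-specific attenuation maps (strategy 3 above)
    with a pixel-wise weighted mean. Weighting by relative photopeak
    counts is an assumption for illustration."""
    w = counts_93 / (counts_93 + counts_185)
    return w * mu_93 + (1.0 - w) * mu_185

# Toy 2x2 attenuation maps (cm^-1); 93 keV window has 3x the counts.
mu_93 = np.array([[0.17, 0.16], [0.15, 0.17]])
mu_185 = np.array([[0.14, 0.13], [0.12, 0.14]])
mu = combined_attenuation_map(mu_93, mu_185, counts_93=3.0, counts_185=1.0)
```

A single combined map like `mu` can then be used to attenuation-correct the summed emission data.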

  6. Multi-Point Combustion System: Final Report

    NASA Technical Reports Server (NTRS)

    Goeke, Jerry; Pack, Spencer; Zink, Gregory; Ryon, Jason

    2014-01-01

    A low-NOx emission combustor concept has been developed for NASA's Environmentally Responsible Aircraft (ERA) program to meet N+2 emissions goals for a 70,000 lb thrust engine application. These goals include a 75 percent reduction of LTO NOx from CAEP6 standards without increasing CO, UHC, or smoke from that of the current state of the art. An additional key factor in this work is to improve lean combustion stability over that of previous work performed on similar technology in the early 2000s. The purpose of this paper is to present the final report for the NASA contract. This work included the design, analysis, and test of a multi-point combustion system. All design work was based on the results of Computational Fluid Dynamics modeling, with the end results tested on a medium pressure combustion rig at UC and a medium pressure combustion rig at GRC. The theories behind the designs, results of analysis, and experimental test data are discussed in this report. The combustion system consists of five radially staged rows of injectors, where ten small-scale injectors are used in place of a single traditional nozzle. Major accomplishments of the current work include the design of a Multipoint Lean Direct Injection (MLDI) array and associated air blast and pilot fuel injectors, which is expected to meet or exceed the goal of a 75 percent reduction in LTO NOx from CAEP6 standards. This design incorporates a reduced number of injectors over previous multipoint designs, simplified and lightweight components, and a very compact combustor section. An additional outcome of the program is validation that the design of these combustion systems can be aided by the use of Computational Fluid Dynamics to predict and reduce emissions. Furthermore, the staging of fuel through the individually controlled, radially staged injector rows successfully demonstrated improved low-power operability as well as improvements in emissions over previous multipoint designs. An additional comparison between Jet-A fuel and a hydrotreated biofuel is made to determine the viability of the technology for use with alternative fuels. Finally, the operability of the array and associated nozzles proved to be very stable without requiring additional active or passive control systems. A number of publications have been published on this work.

  7. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any such deterministic mapping might be equally useful, we focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, the perturbed observation ensemble Kalman filter, and ensembles of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
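    In the linear-Gaussian case mentioned above, the minimization defining each mapped sample has a closed form, and randomized maximum likelihood then samples the posterior exactly. A toy sketch with invented matrices and observations, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rml_sample(G, d_obs, m_pr, C_m, C_d):
    """One randomized-maximum-likelihood sample for a linear-Gaussian
    problem: draw a prior sample, perturb the data, then minimize the
    combined data/model mismatch (closed form since G is linear)."""
    m0 = rng.multivariate_normal(m_pr, C_m)   # sample from the prior
    d0 = rng.multivariate_normal(d_obs, C_d)  # perturbed observation
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
    return m0 + K @ (d0 - G @ m0)             # minimizer of the cost

G = np.array([[1.0, 0.5]])        # linear observation operator
d_obs = np.array([2.0])
m_pr = np.zeros(2)                # prior mean
C_m = np.eye(2)                   # prior covariance
C_d = np.array([[0.1]])           # observation-error covariance
samples = np.array([rml_sample(G, d_obs, m_pr, C_m, C_d)
                    for _ in range(2000)])
```

With these values the analytic posterior mean is approximately (1.481, 0.741), which the sample mean recovers.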

  8. PepMapper: a collaborative web tool for mapping epitopes from affinity-selected peptides.

    PubMed

    Chen, Wenhan; Guo, William W; Huang, Yanxin; Ma, Zhiqiang

    2012-01-01

    Epitope mapping from affinity-selected peptides has become popular in epitope prediction, and correspondingly many Web-based tools have been developed in recent years. However, the performance of these tools varies in different circumstances. To address this problem, we employed an ensemble approach to incorporate two popular Web tools, MimoPro and Pep-3D-Search, together for taking advantages offered by both methods so as to give users more options for their specific purposes of epitope-peptide mapping. The combined operation of Union finds as many associated peptides as possible from both methods, which increases sensitivity in finding potential epitopic regions on a given antigen surface. The combined operation of Intersection achieves to some extent the mutual verification by the two methods and hence increases the likelihood of locating the genuine epitopic region on a given antigen in relation to the interacting peptides. The Consistency between Intersection and Union is an indirect sufficient condition to assess the likelihood of successful peptide-epitope mapping. On average from 27 tests, the combined operations of PepMapper outperformed either MimoPro or Pep-3D-Search alone. Therefore, PepMapper is another multipurpose mapping tool for epitope prediction from affinity-selected peptides. The Web server can be freely accessed at: http://informatics.nenu.edu.cn/PepMapper/
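    The Union, Intersection, and Consistency operations described above reduce to elementary set algebra on the two tools' peptide lists. A minimal sketch with hypothetical peptide identifiers:

```python
def combine_predictions(mimopro_hits, pep3d_hits):
    """Ensemble of two epitope-mapping tools as described above:
    Union maximizes sensitivity, Intersection gives mutual verification,
    and Consistency (|Intersection| / |Union|) gauges their agreement."""
    a, b = set(mimopro_hits), set(pep3d_hits)
    union, inter = a | b, a & b
    consistency = len(inter) / len(union) if union else 0.0
    return union, inter, consistency

# Hypothetical affinity-selected peptides returned by each tool.
union, inter, cons = combine_predictions(
    ["PEP1", "PEP2", "PEP3"], ["PEP2", "PEP3", "PEP4"])
```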

  9. Comparison of clinical outcomes of multi-point umbrella suturing and single purse suturing with two-point traction after procedure for prolapse and hemorrhoids (PPH) surgery.

    PubMed

    Jiang, Huiyong; Hao, Xiuyan; Xin, Ying; Pan, Youzhen

    2017-11-01

    To compare the clinical outcomes of multipoint umbrella suture and single-purse suture with two-point traction after the procedure for prolapse and hemorrhoids (PPH) for the treatment of mixed hemorrhoids. Ninety patients were randomly divided into a PPH plus single-purse suture group (Group A) and a PPH plus multipoint umbrella suture group (Group B). All operations were performed by an experienced surgeon. Operation time, width of the specimen, extent of hemorrhoid retraction, postoperative pain, postoperative bleeding, and length of hospitalization were recorded and compared. Statistical analysis was conducted with the t-test and χ2 test. There were no significant differences in sex, age, course of disease, or degree of hemorrhoid prolapse between the two groups. The operative time in Group A was significantly shorter than that in Group B (P < 0.05). However, the incidence rates of submucosal hematoma and incomplete hemorrhoid core retraction were significantly lower in Group B (P < 0.05), whereas the width of the specimens in Group B was greater than that in Group A (P < 0.05). There were fewer redundant skin tags in Group B at the three-month follow-up. No significant differences in postoperative pain, postoperative bleeding, or length of hospital stay were observed (P > 0.05 for all comparisons). The multipoint umbrella suture showed better clinical outcomes because of its targeted suturing according to the extent of hemorrhoid prolapse. Copyright © 2017. Published by Elsevier Ltd.

  10. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

    Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies to detect nerves for nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds is sufficient to measure n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. Because a multipoint-Raman-derived tissue discrimination image is too sparse to visualize nerve arrangement, we supplemented it with morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what fraction of the Raman measurement points in a given structure is classified as nerve. Setting the nerve detection criterion as 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity both of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
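    The 40%-of-points detection criterion described above can be sketched as a simple structure-level decision rule; the labels below are invented for illustration:

```python
def is_nerve(point_labels, threshold=0.40):
    """Structure-level decision rule from the abstract above: call a
    structure a nerve when at least `threshold` of its multipoint Raman
    measurement points were classified as nerve."""
    frac = sum(1 for p in point_labels if p == "nerve") / len(point_labels)
    return frac >= threshold

# Hypothetical per-point classifications within one structure (3/5 = 0.6).
labels = ["nerve", "nerve", "connective", "nerve", "connective"]
```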

  11. Landslide susceptibility mapping for a landslide-prone area (Findikli, NE of Turkey) by likelihood-frequency ratio and weighted linear combination models

    NASA Astrophysics Data System (ADS)

    Akgun, Aykut; Dag, Serhat; Bulut, Fikri

    2008-05-01

    Landslides are very common natural problems in the Black Sea Region of Turkey due to the steep topography, improper use of land cover, and climatic conditions favorable to landslides. In the western part of the region, many studies have been carried out, especially in the last decade, for landslide susceptibility mapping using different evaluation methods such as deterministic approaches, landslide distribution, and qualitative, statistical, and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located in the eastern part of the Black Sea Region of Turkey by the likelihood-frequency ratio (LRM) model and the weighted linear combination (WLC) model, and to compare the results obtained. For this purpose, landslide inventory maps of the area were prepared for the years 1983 and 1995 by detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads, and the land cover of the study area are considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map showed more detail, whereas the LRM map produced weaker results; the reason is that the majority of pixels in the LRM map have higher values than those in the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both were compared with the landslide inventory map. Although no landslides fall in the very high susceptibility class of either map, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map, compared with 49% for the LRM map. This shows that the WLC model exhibited higher performance than the LRM model.
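    The likelihood-frequency ratio underlying the LRM model is, for each class of a conditioning factor, the ratio of that class's share of landslide pixels to its share of all pixels. A minimal sketch with hypothetical pixel counts:

```python
def frequency_ratio(landslide_px, class_px, total_landslide_px, total_px):
    """Likelihood-frequency ratio for one class of a conditioning factor:
    the class's share of landslide pixels divided by its share of all
    pixels. FR > 1 means the class is over-represented in landslides."""
    return (landslide_px / total_landslide_px) / (class_px / total_px)

# Hypothetical slope-angle class: 30% of landslides on 10% of the area.
fr = frequency_ratio(landslide_px=300, class_px=10_000,
                     total_landslide_px=1_000, total_px=100_000)
```

Summing the class ratios of all conditioning factors at each pixel yields the susceptibility index that is then classified into the susceptibility zones.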

  12. X-linked infantile spinal muscular atrophy: clinical definition and molecular mapping.

    PubMed

    Dressman, Devin; Ahearn, Mary Ellen; Yariz, Kemal O; Basterrecha, Hugo; Martínez, Francisco; Palau, Francesc; Barmada, M Michael; Clark, Robin Dawn; Meindl, Alfons; Wirth, Brunhilde; Hoffman, Eric P; Baumbach-Reardon, Lisa

    2007-01-01

    X-linked infantile spinal-muscular atrophy (XL-SMA) is a rare disorder, which presents with the clinical characteristics of hypotonia, areflexia, and multiple congenital contractures (arthrogryposis) associated with loss of anterior horn cells and death in infancy. We have previously reported a single family with XL-SMA that mapped to Xp11.3-q11.2. Here we report further clinical description of XL-SMA plus an additional seven unrelated (XL-SMA) families from North America and Europe that show linkage data consistent with the same region. We first investigated linkage to the candidate disease gene region using microsatellite repeat markers. We further saturated the candidate disease gene region using polymorphic microsatellite repeat markers and single nucleotide polymorphisms in an effort to narrow the critical region. Two-point and multipoint linkage analysis was performed using the Allegro software package. Linkage analysis of all XL-SMA families displayed linkage consistent with the original XL-SMA region. The addition of new families and new markers has narrowed the disease gene interval for a XL-SMA locus between SNP FLJ22843 near marker DXS 8080 and SNP ARHGEF9 which is near DXS7132 (Xp11.3-Xq11.1).

  13. Multibaseline gravitational wave radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit

    2011-03-15

    We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require the deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for a SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied to real data.
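    For independent Gaussian cross-correlation measurements, maximizing the likelihood ratio over a non-negative signal strength has a simple closed form. A toy single-amplitude sketch, not the full network statistic of the paper:

```python
import numpy as np

def max_likelihood_ratio(x, h, sigma2):
    """Log-likelihood ratio maximized over the SGWB strength a, for toy
    cross-correlation measurements x_i = a*h_i + n_i with independent
    Gaussian noise of variance sigma2_i."""
    num = np.sum(x * h / sigma2)
    den = np.sum(h * h / sigma2)
    a_hat = max(num / den, 0.0)                # strength is physically >= 0
    return a_hat * num - 0.5 * a_hat**2 * den  # log L(a_hat) - log L(0)

# Hypothetical measurements from three baselines with equal noise.
x = np.array([1.2, 0.9, 1.1])
h = np.array([1.0, 1.0, 1.0])
s2 = np.array([0.5, 0.5, 0.5])
lr = max_likelihood_ratio(x, h, s2)
```

When the unconstrained maximizer is positive, the statistic equals num**2 / (2 * den), i.e. a matched-filter signal-to-noise ratio squared over two.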

  14. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  15. Multipoint molecular recognition within a calix[6]arene funnel complex

    PubMed Central

    Coquière, David; de la Lande, Aurélien; Martí, Sergio; Parisel, Olivier; Prangé, Thierry; Reinaud, Olivia

    2009-01-01

    A multipoint recognition system based on a calix[6]arene is described. The calixarene core is decorated on alternating aromatic subunits by 3 imidazole arms at the small rim and 3 aniline groups at the large rim. This substitution pattern projects the aniline nitrogens toward each other when Zn(II) binds at the Tris-imidazole site or when a proton binds at an aniline. The XRD structure of the monoprotonated complex having an acetonitrile molecule bound to Zn(II) in the cavity revealed a constrained geometry at the metal center reminiscent of an entatic state. Computer modeling suggests that the aniline groups behave as a tritopic monobasic site in which only 1 aniline unit is protonated and interacts with the other 2 through strong hydrogen bonding. The metal complex selectively binds a monoprotonated diamine vs. a monoamine through multipoint recognition: coordination to the metal ion at the small rim, hydrogen bonding to the calix-oxygen core, CH/π interaction within the cavity's aromatic walls, and H-bonding to the anilines at the large rim. PMID:19237564

  16. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas with zinc or uranium metal for determining the D/H ratio is both tedious and time-consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE estimates obtained from nine overweight but healthy subjects using the traditional Zn-reduction method with those obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared with similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
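    The "multi-point" technique fits the isotope elimination rate to all samples rather than only the first and last. A minimal sketch with synthetic mono-exponential data, not values from the study:

```python
import numpy as np

def elimination_rate(times_d, enrichment):
    """Multi-point estimate of an isotope elimination rate constant: fit
    ln(enrichment) versus time by least squares over all samples, rather
    than using only the first and last ("two-point") samples."""
    slope, _ = np.polyfit(times_d, np.log(enrichment), 1)
    return -slope  # per day

# Synthetic sampling schedule (days) and mono-exponential decay, k = 0.12/d.
t = np.array([0.0, 3.0, 7.0, 10.0, 14.0])
e = 100.0 * np.exp(-0.12 * t)
k = elimination_rate(t, e)
```

With noisy real data, the regression over all points averages out measurement error that a two-point estimate would absorb entirely into the rate constant.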

  17. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated with a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets; a computer simulation further confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm can estimate the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  18. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated with a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets; a computer simulation further confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm can estimate the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
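    A maximum-likelihood threshold estimate in the spirit of threshold hunting can be sketched by assuming a logistic probability of evoking a response as a function of stimulus intensity and maximizing the Bernoulli likelihood over a grid of candidate thresholds. The fixed slope and data below are illustrative assumptions, not the study's algorithm parameters:

```python
import numpy as np

def ml_threshold(intensities, responses, grid=np.linspace(0.1, 5.0, 491)):
    """Pick the threshold maximizing the Bernoulli likelihood of the
    observed present/absent responses, assuming a logistic response
    probability with a fixed slope (an assumption for this sketch)."""
    slope = 4.0  # assumed fixed psychometric slope
    best_t, best_ll = grid[0], -np.inf
    for t in grid:
        p = 1.0 / (1.0 + np.exp(-slope * (intensities - t)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

# Hypothetical stimulation intensities and MEP present (1) / absent (0).
stim = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
resp = np.array([0, 0, 0, 1, 1, 1])
t_hat = ml_threshold(stim, resp)
```

For this symmetric toy data the likelihood peaks midway between the last absent and first present responses.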

  19. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  20. X-linked mental retardation with thin habitus, osteoporosis, and kyphoscoliosis: Linkage to Xp21.3-p22.12

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arena, J.F.; Lubs, H.; Schwartz, C.

    We reevaluated a family previously described as having nonspecific X-linked mental retardation (XLMR) by Snyder and Robinson (MIM 309583). Clinical and DNA studies were conducted on 17 relatives, including 6 males with mild-to-moderate mental retardation, 3 carrier females, and 8 normal males. In contrast to the normal appearance and minimal clinical findings reported 22 years ago, affected males were found to have a characteristic set of clinical findings. These developed gradually over the first 2 decades, and included thin body build with diminished muscle mass, osteoporosis and kyphoscoliosis, slight facial asymmetry with a prominent lower lip, nasal speech, high narrow or cleft palate, and long great toes. Carrier females were clinically normal. Multipoint linkage analysis indicated linkage to markers distal to the 3' end of DMD (DXS41 and DXS989), with a maximal lod score of 4.7. On the basis of these findings, this entity is redefined as an XLMR syndrome. 22 refs., 6 figs., 2 tabs.
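
    For intuition, the two-point lod score underlying such linkage analyses is the log10 likelihood ratio of a candidate recombination fraction against free recombination (theta = 0.5); multipoint analysis generalizes the same likelihood-ratio idea across a marker map. The counts below are illustrative, not from this family:

```python
import math

def lod(theta, r, n):
    """log10 likelihood ratio for r recombinants in n informative meioses,
    comparing recombination fraction theta against free recombination (0.5)."""
    l_theta = (theta ** r) * ((1.0 - theta) ** (n - r))
    l_null = 0.5 ** n
    return math.log10(l_theta / l_null)

score = lod(0.1, 1, 10)   # ~1.6: suggestive, but below the classic threshold of 3
```

    A score of 4.7, as reported here, corresponds to odds of roughly 50,000:1 in favor of linkage at the maximizing location.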

  1. Implications of climate change on the distribution of the tick vector Ixodes scapularis and risk for Lyme disease in the Texas-Mexico transboundary region

    USDA-ARS?s Scientific Manuscript database

    Disease risk maps are important tools that help ascertain the likelihood of exposure to specific infectious agents. Understanding how climate change may affect the suitability of habitats for ticks will improve the accuracy of risk maps of tick-borne pathogen transmission in humans and domestic anim...

  2. An Activation Likelihood Estimation Meta-Analysis Study of Simple Motor Movements in Older and Young Adults

    PubMed Central

    Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.

    2016-01-01

    The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910

  3. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
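
    The penalty idea can be sketched in one dimension: a mild Beta-style penalty on a scale-free parameter (here a correlation, mapped to (0, 1)) shrinks the maximum-likelihood estimate toward zero. This toy uses made-up sufficient statistics and is not the authors' multivariate REML machinery:

```python
import numpy as np

# sufficient statistics for n standardized observation pairs (illustrative values)
n, s11, s22, s12 = 30, 30.0, 30.0, 15.0   # sample correlation 0.5

def log_lik(r):
    """Bivariate-normal log-likelihood in the correlation r, unit variances."""
    return -n * 0.5 * np.log(1 - r ** 2) - (s11 + s22 - 2 * r * s12) / (2 * (1 - r ** 2))

def log_penalty(r, a=1.5, b=1.5):
    """Mild Beta(a, b) penalty on the scale-free parameter u = (r + 1)/2."""
    u = (r + 1.0) / 2.0
    return (a - 1.0) * np.log(u) + (b - 1.0) * np.log(1.0 - u)

grid = np.linspace(-0.99, 0.99, 1981)
mle = grid[np.argmax([log_lik(r) for r in grid])]
pen = grid[np.argmax([log_lik(r) + log_penalty(r) for r in grid])]
# the penalized estimate is shrunk slightly toward zero relative to the MLE
```

    With a mild penalty the shift is small for well-estimated parameters but grows for poorly informed ones, which is where the reduction in sampling variance pays off.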

  4. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    SUMMARY Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimations and inference methods for traditional survival data are not directly applicable for length-biased right-censored data. We propose new expectation-maximization algorithms for estimations based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating nonparametric distribution function, estimating nonparametric hazard function under an increasing failure rate constraint, and jointly estimating baseline hazards function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  5. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
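
    The shrinkage step can be illustrated in its simplest form: under a Gaussian likelihood with a Laplacian prior, the MAP update is a soft-thresholded version of the ML update. The paper derives the analogous inverse-quadratic and inverse-cubic shrinkage functions for generalized Laplacian and Gaussian priors; the soft threshold below is only the textbook special case:

```python
import numpy as np

def soft_threshold(x, t):
    """MAP estimate under a Gaussian likelihood centered at x and a
    Laplacian prior: values within t of zero are set to zero, larger
    values are pulled toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

ml_update = np.array([-2.0, -0.3, 0.1, 1.5])   # illustrative unregularized updates
map_update = soft_threshold(ml_update, t=0.5)
```

    Suppressing small updates in this way is what removes the noise that would otherwise obscure compact high-Z targets.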

  6. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron-density map for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function: LL(x) = log[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) is the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
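
    A minimal sketch of evaluating such a local log-likelihood as a mixture over region hypotheses; the Gaussian densities and the probabilities below are purely illustrative stand-ins for the calibrated distributions the patent describes:

```python
import math

def gaussian(x, mu, sigma):
    """Normal density, used here as a stand-in for p(rho | region)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def local_log_lik(rho, p_prot, p_solv, p_h=0.0, motif_pdf=None):
    """Log of a mixture over 'protein', 'solvent' and optional 'motif' hypotheses."""
    total = p_prot * gaussian(rho, 0.4, 0.2) + p_solv * gaussian(rho, 0.0, 0.1)
    if motif_pdf is not None:
        total += p_h * motif_pdf(rho)
    return math.log(total)

ll = local_log_lik(0.35, p_prot=0.7, p_solv=0.3)   # one grid point, no motif term
```

    Summing such terms over all map grid points gives the objective whose maximization with respect to the structure factors drives the density modification.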

  7. Experimental investigation of the Multipoint Ultrasonic Flowmeter

    NASA Astrophysics Data System (ADS)

    Jakub, Filipský

    2018-06-01

    The Multipoint Ultrasonic Flowmeter is a vector tomographic device capable of reconstructing all three components of a velocity field based solely on boundary ultrasonic measurements. Computer simulations have shown the feasibility of such a device and have been published previously. This paper describes an experimental investigation of the achievable accuracy of the method. The doubled acoustic tripoles used to obtain information on the solenoidal part of the vector field exhibit extremely small differences between the times of flight (TOFs) at the individual sensors and are therefore sensitive to parasitic effects in TOF measurement. Sampling at 40 MHz combined with a correlation method is used to measure the TOF.
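
    Correlation-based TOF estimation can be sketched with a synthetic signal: the delay is recovered from the peak of the cross-correlation between the received and reference waveforms. The 40 MHz sampling rate matches the paper; the pulse shape and delay are made up:

```python
import numpy as np

fs = 40e6                                   # 40 MHz sampling, as in the paper
t = np.arange(0, 200e-6, 1 / fs)
# hypothetical transmit pulse: 1 MHz tone under a Gaussian envelope at 20 us
pulse = np.exp(-((t - 20e-6) ** 2) / (2 * (2e-6) ** 2)) * np.sin(2 * np.pi * 1e6 * t)

delay_samples = 37                          # illustrative true delay
received = np.concatenate([np.zeros(delay_samples), pulse])[: pulse.size]

corr = np.correlate(received, pulse, mode="full")
lag = int(np.argmax(corr)) - (pulse.size - 1)   # peak location gives the delay
tof = lag / fs                                  # seconds
```

    Sub-sample refinement (e.g. parabolic interpolation of the correlation peak) is what makes the extremely small TOF differences between tripole sensors resolvable in practice.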

  8. Development and accuracy of a multipoint method for measuring visibility.

    PubMed

    Tai, Hongda; Zhuang, Zibo; Sun, Dongsong

    2017-10-01

    Accurate measurements of visibility are of great importance in many fields. This paper reports a multipoint visibility measurement (MVM) method to measure and calculate the atmospheric transmittance, extinction coefficient, and meteorological optical range (MOR). The relative errors of atmospheric transmittance and MOR measured by the MVM method and traditional transmissometer method are analyzed and compared. Experiments were conducted indoors, and the data were simultaneously processed. The results revealed that the MVM can effectively improve the accuracy under different visibility conditions. The greatest improvement of accuracy was 27%. The MVM can be used to calibrate and evaluate visibility meters.
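
    The quantities involved are tied together by the Bouguer transmittance law and the WMO (Koschmieder) definition of MOR; a least-squares fit of the extinction coefficient from transmittance over several baselines sketches the multipoint idea (all numbers illustrative):

```python
import math

def extinction_from_transmittance(T, baseline_m):
    """Bouguer law: T = exp(-sigma * L), so sigma = -ln(T) / L."""
    return -math.log(T) / baseline_m

def mor_from_extinction(sigma):
    """WMO definition: MOR is the path over which transmittance falls to 5%."""
    return -math.log(0.05) / sigma

# multipoint idea: transmittance measured over several baselines,
# extinction fit by least squares on ln T = -sigma * L
baselines = [10.0, 20.0, 30.0]                      # metres (hypothetical)
sigma_true = 0.01                                   # 1/m (hypothetical)
Ts = [math.exp(-sigma_true * L) for L in baselines]
num = sum(-math.log(T) * L for T, L in zip(Ts, baselines))
den = sum(L * L for L in baselines)
sigma_hat = num / den                               # least-squares slope
mor = mor_from_extinction(sigma_hat)                # ~300 m here
```

    Using several baselines averages out per-path measurement error, which is one route to the accuracy improvement the paper reports.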

  9. Two-way digital communications

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Daly, Ed

    1996-03-01

    The communications industry has been rapidly converting from analog to digital communications for audio, video, and data. The initial applications have been concentrating on point-to-multipoint transmission. Currently, a new revolution is occurring in which two-way point-to-point transmission is a rapidly growing market. The system designs for video compression developed for point-to-multipoint transmission are unsuitable for this new market as well as for satellite based video encoding. A new system developed by the Space Communications Technology Center has been designed to address both of these newer applications. An update on the system performance and design will be given.

  10. On the Interconnection of Incompatible Solid Finite Element Meshes Using Multipoint Constraints

    NASA Technical Reports Server (NTRS)

    Fox, G. L.

    1985-01-01

    Incompatible meshes, i.e., meshes that physically must have a common boundary, but do not necessarily have coincident grid points, can arise in the course of a finite element analysis. For example, two substructures may have been developed at different times for different purposes and it becomes necessary to interconnect the two models. A technique that uses only multipoint constraints, i.e., MPC cards (or MPCS cards in substructuring), is presented. Since the method uses only MPC's, the procedure may apply at any stage in an analysis; no prior planning or special data is necessary.
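
    Mathematically, multipoint constraints express the dependent DOFs through independent ones, u = T q, so the reduced stiffness is Tᵀ K T. A toy sketch joining two 1-D spring elements whose end grid points are coincident but not shared (stiffness value illustrative):

```python
import numpy as np

# two 1-D spring elements; the MPC ties DOF u2 (mesh A) to DOF u1 (mesh B)
k = 100.0
K = np.zeros((4, 4))
for i, j in [(0, 1), (2, 3)]:     # assemble each element's stiffness
    K[np.ix_([i, j], [i, j])] += k * np.array([[1, -1], [-1, 1]])

# transformation to independent DOFs q = [u0, u1, u3]; constraint u2 = u1
T = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
K_red = T.T @ K @ T               # reduced stiffness of the joined model
```

    The reduced matrix is exactly that of two springs sharing a middle node, which is why the MPC approach needs no prior planning: the constraint equations alone perform the interconnection.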

  11. Multipoint attachment to a support protects enzyme from inactivation by organic solvents: alpha-Chymotrypsin in aqueous solutions of alcohols and diols.

    PubMed

    Mozhaev, V V; Sergeeva, M V; Belova, A B; Khmelnitsky, Y L

    1990-03-25

    Inactivation of alpha-chymotrypsin in aqueous solutions of alcohols and diols proceeds both reversibly and irreversibly. Reversible loss of specific enzyme activity results from conformational changes (unfolding) of the enzyme, detected by fluorescence spectroscopy. Multipoint covalent attachment to the matrix of a polyacrylamide gel by the copolymerization method stabilizes alpha-chymotrypsin against denaturation by alcohols, the stabilizing effect increasing with the number of bonds between the protein and the support. Immobilization also protects the enzyme from irreversible inactivation by organic solvents resulting from bimolecular aggregation and autolysis.

  12. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  13. Stereo imaging velocimetry for microgravity applications

    NASA Technical Reports Server (NTRS)

    Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.

    1994-01-01

    Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.

  14. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on–off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  15. What are hierarchical models and how do we analyze them?

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference, including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
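
    A minimal site-occupancy likelihood, with the latent occupancy state marginalized out, can be maximized by brute force; the detection histories below are made up:

```python
import math

# detection histories: rows = sites, columns = repeat visits (made-up data)
Y = [[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 0]]

def neg_log_lik(psi, p):
    """Occupancy probability psi, per-visit detection probability p."""
    nll = 0.0
    for y in Y:
        d, J = sum(y), len(y)
        lik = psi * (p ** d) * ((1.0 - p) ** (J - d))  # site occupied
        if d == 0:
            lik += 1.0 - psi       # a never-detected site may simply be empty
        nll -= math.log(lik)
    return nll

grid = [i / 100.0 for i in range(1, 100)]
psi_hat, p_hat = min(((a, b) for a in grid for b in grid),
                     key=lambda t: neg_log_lik(*t))
```

    The mixture term for all-zero histories is the hierarchical part: it separates "occupied but undetected" from "unoccupied", which plain logistic regression cannot do.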

  16. Cox Regression Models with Functional Covariates for Survival Data.

    PubMed

    Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M

    2015-06-01

    We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
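
    The fitting criterion can be sketched for a single scalar covariate: minimize the negative log partial likelihood, optionally plus a quadratic (ridge) penalty standing in for the smoothing penalty; the data below are made up and tied event times are ignored:

```python
import math

# (event time, event indicator, covariate) — hypothetical, no tied event times
data = [(2.0, 1, 0.5), (3.0, 0, -0.2), (4.0, 1, 1.0), (5.0, 1, -1.0)]

def neg_log_partial_lik(beta):
    nll = 0.0
    for t_i, d_i, x_i in data:
        if d_i:  # each event contributes log of its share of the risk-set hazard
            risk = sum(math.exp(beta * x) for t, _, x in data if t >= t_i)
            nll -= beta * x_i - math.log(risk)
    return nll

grid = [i / 100.0 for i in range(-300, 301)]
beta_hat = min(grid, key=neg_log_partial_lik)                       # unpenalized
beta_pen = min(grid, key=lambda b: neg_log_partial_lik(b) + b * b)  # ridge penalty
```

    In the functional-covariate setting the single beta becomes a coefficient function, and the quadratic penalty becomes a roughness penalty whose weight is chosen by AIC or EPIC.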

  17. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  18. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

    Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
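
    The Horvitz–Thompson-type estimator mentioned above weights each detected individual by the inverse of its capture probability; a minimal sketch with made-up probabilities:

```python
def horvitz_thompson(p_detected):
    """Abundance estimate: sum of inverse capture probabilities over the
    individuals that were actually detected."""
    return sum(1.0 / p for p in p_detected)

# four detected animals with (estimated) capture probabilities
N_hat = horvitz_thompson([0.25, 0.5, 0.5, 0.2])
```

    In the batch-marking setting the capture probabilities themselves come from the (pseudo-)likelihood fit, so the precision of N_hat is driven by how well those probabilities are estimated.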

  19. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.

    PubMed

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-07

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min−1 · ml−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
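
    The one-tissue compartment model used above gives the tissue curve as the arterial input convolved with K1·exp(−k2·t); a discrete sketch with illustrative numbers:

```python
import numpy as np

dt = 1.0                                   # time step, seconds
t = np.arange(0.0, 300.0, dt)
Cp = (t / 10.0) * np.exp(-t / 60.0)        # hypothetical arterial input function
K1, k2 = 0.6 / 60.0, 0.1 / 60.0            # illustrative rates, converted to 1/s

# one-tissue solution: C_T(t) = K1 * exp(-k2 t) convolved with C_p(t)
Ct = K1 * dt * np.convolve(Cp, np.exp(-k2 * t))[: t.size]
```

    The indirect method fits K1 and k2 to each voxel's reconstructed TAC; the direct method instead folds this forward model into the sinogram-domain Poisson likelihood, which is where the variance reduction comes from.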

  20. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies

    PubMed Central

    Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong

    2017-01-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843

  1. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies

    NASA Astrophysics Data System (ADS)

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min−1 · ml−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP reconstructions. Direct parametric reconstruction applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
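The indirect route described above (fit a one-tissue compartment model with blood-pool spillover to each reconstructed TAC) can be sketched in a few lines. The input function, frame timing, parameter values and noise level below are all invented for illustration; the fit uses plain least squares rather than the study's frame-duration weighting.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 16)            # frame mid-times (min), ~5 min of data
dt = t[1] - t[0]
ca = 10 * t * np.exp(-2.0 * t)       # hypothetical arterial input function

def model_tac(t, K1, k2, fv):
    # One-tissue model: C_T(t) = K1 * exp(-k2*t) convolved with C_a(t),
    # mixed with a blood-pool spillover fraction fv
    conv = dt * np.convolve(np.exp(-k2 * t), ca)[: len(t)]
    return (1 - fv) * K1 * conv + fv * ca

true = (0.9, 0.3, 0.2)               # K1 (ml/min/ml), k2 (1/min), spillover
rng = np.random.default_rng(0)
tac = model_tac(t, *true) + rng.normal(0.0, 0.02, t.size)

# Voxel-wise fitting step of the indirect method, for one simulated TAC
popt, _ = curve_fit(model_tac, t, tac, p0=(0.5, 0.5, 0.1),
                    bounds=(0.0, [5.0, 5.0, 1.0]))
```

In the real indirect method this fit is repeated independently for every myocardial voxel, which is why the resulting K1 maps inherit the frame-level reconstruction noise.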

  2. Multi-Point Measurements to Characterize Radiation Belt Electron Precipitation Loss

    NASA Astrophysics Data System (ADS)

    Blum, L. W.

    2017-12-01

    Multipoint measurements in the inner magnetosphere allow the spatial and temporal evolution of various particle populations and wave modes to be disentangled. To better characterize and quantify radiation belt precipitation loss, we utilize multi-point measurements to study both precipitating electrons directly and the potential drivers of this loss process. Magnetically conjugate CubeSat and balloon measurements are combined to estimate the temporal and spatial characteristics of dusk-side precipitation features and to quantify loss due to these events. To then understand the drivers of precipitation events, and what determines their spatial structure, we utilize measurements from the dual Van Allen Probes to estimate spatial and temporal scales of various wave modes in the inner magnetosphere, and compare these to precipitation characteristics. The structure, timing, and spatial extent of waves are compared to those of MeV electron precipitation during a few individual events to determine when and where EMIC waves cause radiation belt electron precipitation. Magnetically conjugate measurements provide observational support of the theoretical picture of duskside interaction of EMIC waves and MeV electrons leading to radiation belt loss. Finally, understanding the drivers controlling the spatial scales of wave activity in the inner magnetosphere is critical for uncovering the underlying physics behind the wave generation as well as for better predicting where and when waves will be present. Again using multipoint measurements from the Van Allen Probes, we estimate the spatial and temporal extents and evolution of plasma structures and their gradients in the inner magnetosphere, to better understand the drivers of magnetospheric wave characteristic scales. In particular, we focus on EMIC waves and the plasma parameters important for their growth, namely cold plasma density and cool and warm ion density, anisotropy, and composition.

  3. Optimization of bump and blowing to control the flow through a transonic compressor blade cascade

    NASA Astrophysics Data System (ADS)

    Mazaheri, K.; Khatibirad, S.

    2018-03-01

    Shock control bump (SCB) and blowing are two flow control methods, used here to improve the aerodynamic performance of transonic compressors. Both methods are applied to a NASA rotor 67 blade section and are optimized to minimize the total pressure loss. A continuous adjoint algorithm is used for multi-point optimization of a SCB to improve the aerodynamic performance of the rotor blade section over a range of operating conditions around its design point. A multi-point and two single-point optimizations are performed in the design and off-design conditions. It is shown that the single-point optimized shapes have the best performance at their respective operating conditions, but the multi-point one has a better overall performance across the whole operating range. An analysis is given of how both the single- and multi-point optimized SCBs change the wave structure between blade sections in a similar way, resulting in a more favorable flow pattern. Interactions of the SCB with the boundary layer and the wave structure, and its effects on the separation regions, are also studied. We have also introduced the concept of blowing for control of shock wave and boundary-layer interaction. A geometrical model is introduced, and the geometrical and physical parameters of blowing are optimized at the design point. The performance improvements of blowing are compared with the SCB. The physical interactions of SCB with the boundary layer and the shock wave are analyzed. The effects of SCB on the wave structure in the flow domain outside the boundary-layer region are investigated. It is shown that the effects of the blowing mechanism are very similar to the SCB.
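The difference between single-point and multi-point optimization can be illustrated with a toy weighted-sum formulation. The scalar loss surrogate, the operating points and the weights below are invented for illustration; the actual study minimizes total pressure loss via a continuous adjoint solver, not a 1-D surrogate.

```python
from scipy.optimize import minimize_scalar

# Hypothetical total-pressure-loss surrogate omega(h; M): the optimal bump
# height h shifts with the operating Mach number M.
def loss(h, M):
    return 0.02 + (h - 0.1 * (M - 1.0)) ** 2

points  = [1.05, 1.10, 1.15]        # operating conditions around the design point
weights = [0.2, 0.3, 0.5]           # hypothetical emphasis on each condition

# Single-point optimum at the design condition M = 1.10
single = minimize_scalar(lambda h: loss(h, 1.10)).x

# Multi-point optimum: minimize the weighted sum of losses over all conditions
multi = minimize_scalar(
    lambda h: sum(w * loss(h, M) for w, M in zip(weights, points))).x
```

With a quadratic surrogate the multi-point optimum is simply the weighted mean of the single-point optima, which is why it trades a little design-point performance for robustness off-design.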

  4. Regulation of protein multipoint adsorption on ion-exchange adsorbent and its application to the purification of macromolecules.

    PubMed

    Huang, Yongdong; Bi, Jingxiu; Zhao, Lan; Ma, Guanghui; Su, Zhiguo

    2010-12-01

    Ion-exchange chromatography (IEC) using commercial ionic adsorbents is a widely used technique for protein purification. Protein adsorption onto ion-exchange adsorbents often involves multipoint adsorption. In IEC of multimeric proteins or "soft" proteins, intense multipoint binding can make subsequent desorption difficult and may even destroy the protein structure and abolish its biological activity. In this paper, DEAE Sepharose FF adsorbents with controllable ligand densities from 0.020 to 0.183 mmol/ml were synthesized, and the effect of ligand density on the static ion-exchange adsorption of bovine serum albumin (BSA) onto DEAE Sepharose FF was then studied by the batch adsorption technique. The steric mass-action (SMA) model was employed to analyze the static adsorption behavior. The results showed that the SMA model parameters, equilibrium constant (K(a)), characteristic number of binding sites (υ) and steric factor (σ), increased gradually with ligand density. Thus, it was feasible to regulate BSA multipoint adsorption by modulating the ligand density of the ion-exchange adsorbent. Furthermore, IEC of hepatitis B surface antigen (HBsAg) using DEAE Sepharose FF adsorbents with different ligand densities was carried out, and the activity recovery of HBsAg was improved from 42% to 67% when the ligand density was decreased from 0.183 to 0.020 mmol/ml. Taking the activity recovery of HBsAg, the purification factor, and the binding capacity into account, DEAE Sepharose FF with a ligand density of 0.041 mmol/ml was most effective for the purification of HBsAg. Such a strategy may also be beneficial for the purification of other macromolecules and multimeric proteins. Copyright © 2010 Elsevier Inc. All rights reserved.
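The SMA equilibrium mentioned above is an implicit isotherm, K_a = (q/c) · (c_s/(Λ − (ν+σ)q))^ν, which has to be solved numerically for the bound concentration q. The sketch below does that with a bracketing root-finder; all parameter values (K_a, ν, σ, Λ, concentrations) are invented for illustration and are not the paper's fitted values.

```python
from scipy.optimize import brentq

def sma_q(c, c_salt, Ka, nu, sigma, Lambda):
    """Bound protein concentration q from the implicit SMA isotherm
       Ka = (q/c) * (c_salt / (Lambda - (nu + sigma)*q))**nu  (Brooks-Cramer form)."""
    f = lambda q: Ka * c * ((Lambda - (nu + sigma) * q) / c_salt) ** nu - q
    # q is bracketed between 0 and the capacity limit Lambda/(nu+sigma)
    return brentq(f, 0.0, Lambda / (nu + sigma) * (1 - 1e-9))

# Hypothetical low- and high-ligand-density parameter sets
q_low  = sma_q(c=0.1, c_salt=0.05, Ka=5.0, nu=3.0, sigma=20.0, Lambda=0.5)
q_high = sma_q(c=0.1, c_salt=0.05, Ka=5.0, nu=6.0, sigma=40.0, Lambda=1.0)
```

Because ν appears as an exponent, even modest increases in the number of binding sites sharpen the salt dependence of adsorption, which is the mechanistic reason ligand density controls how hard desorption becomes.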

  5. Genetic map of the spinocerebellar ataxia type 2 (SCA2) region on chromosome 12

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nechiporuk, A.; Frederick, T.; Pulst, S.M.

    1994-09-01

    The autosomal dominant ataxias (SCAs) are a clinically and genetically heterogeneous group of neurodegenerative diseases characterized by progressive ataxia. At least four gene loci have been identified: SCA1 on chromosome (CHR) 6, SCA2 on CHR 12, Machado-Joseph disease on CHR 14, and SCA families that are not linked to any of the above loci. In addition, the gene causing dentato-rubro-pallido-luysian atrophy has been identified as an expanded CAG repeat on CHR 12p. As a necessary step in identifying the gene for SCA2, we have now identified closer flanking markers. To do this we ordered microsatellite markers in the region and then determined pairwise and multipoint lod scores between the markers and SCA2 in three large pedigrees with SCA. The following order was established with odds > 1,000:1 using six non-SCA pedigrees: D12S101-7.1cM-D12S58-0cM-IGF1-3.6cM-D12S78-1.4cM-D12S317-3.7cM-D12S84-0cM-D12S105-7.2cM-D12S79-7.0cM-PLA2. Using this ordered set of markers we examined linkage to SCA2 in three pedigrees of Italian, Austrian, and French-Canadian descent. Pairwise linkage analysis resulted in significant positive lod scores for all markers. The highest pairwise lod score was obtained with D12S84/D12S105 (Z_max = 7.98, theta_max = 0.05). To further define the location of SCA2, we performed multipoint linkage analysis using the genetic map established above. The highest location score was obtained between D12S317 and D12S84/D12S105. A location of SCA2 between these loci was favored with odds > 100:1. These data narrow the SCA2 candidate region to approximately 3.7 cM. The relatively large number of markers tightly linked to SCA2 will facilitate the assignment of additional SCA pedigrees to CHR 12, and will help in the presymptomatic diagnosis of individuals in families with proven linkage to CHR 12.
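A two-point lod score of the kind reported above (Z_max at a recombination fraction theta_max) is straightforward to compute in the simplest phase-known setting: Z(θ) = log10[θ^R (1−θ)^NR / 0.5^(R+NR)] for R recombinant and NR non-recombinant meioses. The counts below are hypothetical, chosen only so the maximum lands near θ = 0.05; real pedigree likelihoods (unknown phase, missing genotypes, multipoint maps) require pedigree-likelihood software.

```python
import numpy as np

def lod(theta, rec, nonrec):
    """Two-point lod score for phase-known meioses:
       Z(theta) = log10( theta^R * (1-theta)^NR / 0.5^(R+NR) )."""
    n = rec + nonrec
    return (rec * np.log10(theta) + nonrec * np.log10(1 - theta)
            - n * np.log10(0.5))

# Hypothetical counts: 2 recombinants out of 40 informative meioses
grid = np.linspace(0.001, 0.499, 499)
z = lod(grid, rec=2, nonrec=38)
theta_max = grid[z.argmax()]          # maximizes at R/(R+NR) = 0.05
z_max = z.max()
```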

  6. A Major Locus for Fasting Insulin Concentrations and Insulin Resistance on Chromosome 6q with Strong Pleiotropic Effects on Obesity-Related Phenotypes in Nondiabetic Mexican Americans

    PubMed Central

    Duggirala, Ravindranath; Blangero, John; Almasy, Laura; Arya, Rector; Dyer, Thomas D.; Williams, Kenneth L.; Leach, Robin J.; O’Connell, Peter; Stern, Michael P.

    2001-01-01

    Insulin resistance and hyperinsulinemia are strong correlates of obesity and type 2 diabetes, but little is known about their genetic determinants. Using data on nondiabetics from Mexican American families and a multipoint linkage approach, we scanned the genome and identified a major locus near marker D6S403 for fasting "true" insulin levels (LOD score 4.1, empirical P<.0001), which do not cross-react with insulin precursors. Insulin resistance, as assessed by the homeostasis model using fasting glucose and specific insulin (FSI) values, was also strongly linked (LOD score 3.5, empirical P<.0001) with this region. Two other regions across the genome were found to be suggestively linked to FSI: a location on chromosome 2q, near marker D2S141, and another location on chromosome 6q, near marker D6S264. Since several insulin-resistance syndrome (IRS)-related phenotypes were mapped independently to the regions on chromosome 6q, we conducted bivariate multipoint linkage analyses to map the correlated IRS phenotypes. These analyses implicated the same chromosomal region near marker D6S403 (6q22-q23) as harboring a major gene with strong pleiotropic effects on obesity and on lipid measures, including leptin concentrations (e.g., the bivariate LOD for the trait pair specific insulin and leptin was 4.7). A positional candidate gene for insulin resistance in this chromosomal region is the plasma cell-membrane glycoprotein PC-1 (6q22-q23). The genetic location on chromosome 6q, near marker D6S264 (6q25.2-q26), was also identified by the bivariate analysis as exerting significant pleiotropic influences on IRS-related phenotypes (e.g., the bivariate LOD for the trait pair specific insulin and leptin was 4.1). This chromosomal region harbors positional candidate genes, such as the insulin-like growth factor 2 receptor (IGF2R, 6q26) and acetyl-CoA acetyltransferase 2 (ACAT2, 6q25.3-q26). 
In sum, we found substantial evidence for susceptibility loci on chromosome 6q that influence insulin concentrations and other IRS-related phenotypes in Mexican Americans. PMID:11283790

  7. Clinical Paresthesia Atlas Illustrates Likelihood of Coverage Based on Spinal Cord Stimulator Electrode Location.

    PubMed

    Taghva, Alexander; Karst, Edward; Underwood, Paul

    2017-08-01

    Concordant paresthesia coverage is an independent predictor of pain relief following spinal cord stimulation (SCS). Using aggregate data, our objective is to produce a map of paresthesia coverage as a function of electrode location in SCS. This retrospective analysis used x-rays, SCS programming data, and paresthesia coverage maps from the EMPOWER registry of SCS implants for chronic neuropathic pain. Spinal level of dorsal column stimulation was determined by x-ray adjudication and active cathodes in patient programs. Likelihood of paresthesia coverage was determined as a function of stimulating electrode location. Segments of paresthesia coverage were grouped anatomically. Fisher's exact test was used to identify significant differences in likelihood of paresthesia coverage as a function of spinal stimulation level. In the 178 patients analyzed, the most prevalent areas of paresthesia coverage were buttocks, anterior and posterior thigh (each 98%), and low back (94%). Unwanted paresthesia at the ribs occurred in 8% of patients. There were significant differences in the likelihood of achieving paresthesia, with higher thoracic levels (T5, T6, and T7) more likely to achieve low back coverage but also more likely to introduce paresthesia felt at the ribs. Higher levels in the thoracic spine were associated with greater coverage of the buttocks, back, and thigh, and with lesser coverage of the leg and foot. This paresthesia atlas uses real-world, aggregate data to determine likelihood of paresthesia coverage as a function of stimulating electrode location. It represents an application of "big data" techniques, and a step toward achieving personalized SCS therapy tailored to the individual's chronic pain. © 2017 International Neuromodulation Society.
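The level-by-level comparison described above can be sketched with `scipy.stats.fisher_exact` on a 2x2 contingency table. The counts below are hypothetical, not taken from the EMPOWER registry; they only illustrate the shape of the test (coverage achieved vs. not, at upper vs. lower thoracic cathode positions).

```python
from scipy.stats import fisher_exact

# Hypothetical counts: patients achieving low-back paresthesia coverage
# with the active cathode at T5-T7 versus at lower thoracic levels
covered_high, not_covered_high = 45, 15    # cathode at T5-T7
covered_low,  not_covered_low  = 30, 40    # cathode at T8-T10

table = [[covered_high, not_covered_high],
         [covered_low,  not_covered_low]]

# Statistic is the sample odds ratio; exact p-value for the 2x2 table
odds_ratio, p_value = fisher_exact(table)
```

Fisher's exact test is a natural choice here because some spinal-level subgroups in such registries are small, where a chi-squared approximation would be unreliable.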

  8. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
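As a concrete illustration of the setting above: UPGMA corresponds to average-linkage hierarchical clustering of a dissimilarity map. The sketch below feeds a four-taxon dissimilarity map that sits near a polytomy (the competing resolutions are nearly equally supported) into SciPy's average linkage; the matrix values are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# A dissimilarity map on four taxa close to a polytomy: within-pair
# distances are 0.3, all between-pair distances are 0.5.
d = np.array([[0.0, 0.3, 0.5, 0.5],
              [0.3, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.3],
              [0.5, 0.5, 0.3, 0.0]])

# UPGMA = average linkage on the condensed dissimilarity vector;
# the algorithm commits to one resolution of the near-polytomy.
tree = linkage(squareform(d), method="average")
```

Perturbing the off-diagonal entries slightly flips which cone of the polyhedral subdivision the input falls into, and hence which resolved tree UPGMA returns, which is exactly the local behavior the paper studies geometrically.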

  9. Correlation between the Hurst exponent and the maximal Lyapunov exponent: Examining some low-dimensional conservative maps

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2018-01-01

    The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman ranks ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution based on the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and the HEs were then retrieved using 2-2.4 × 10^5 mLEs. The ML procedure made it possible to reproduce the structure of the mixed spaces in great detail.
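The mLE half of the HE-mLE pairing above can be sketched directly: iterate the Chirikov standard map together with its tangent map and average the log stretching factors. Iteration counts and initial conditions below are illustrative choices, not those of the paper.

```python
import numpy as np

def mle_standard_map(K, x, p, n=20000):
    """Finite-time maximal Lyapunov exponent of the Chirikov standard map
         p' = p + K sin(x),  x' = x + p'   (both mod 2*pi),
       estimated by iterating the tangent map with renormalization."""
    v = np.array([1.0, 0.0])
    log_sum = 0.0
    for _ in range(n):
        c = K * np.cos(x)                 # Jacobian entries at the current point
        J = np.array([[1.0 + c, 1.0],     # d(x', p') / d(x, p)
                      [c,       1.0]])
        v = J @ v
        norm = np.linalg.norm(v)
        log_sum += np.log(norm)
        v /= norm                         # renormalize to avoid overflow
        p = (p + K * np.sin(x)) % (2.0 * np.pi)
        x = (x + p) % (2.0 * np.pi)
    return log_sum / n

chaotic = mle_standard_map(K=10.0, x=1.0, p=1.0)  # strongly chaotic regime
regular = mle_standard_map(K=0.5, x=3.0, p=0.0)   # near the stable point (pi, 0)
```

Scanning such estimates over a grid of (K, initial condition) reproduces the kind of mixed-space map the mLE distributions in the paper are built from.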

  10. Functional reorganisation in chronic pain and neural correlates of pain sensitisation: A coordinate based meta-analysis of 266 cutaneous pain fMRI studies.

    PubMed

    Tanasescu, Radu; Cottam, William J; Condon, Laura; Tench, Christopher R; Auer, Dorothee P

    2016-09-01

    Maladaptive mechanisms of pain processing in chronic pain conditions (CP) are poorly understood. We used coordinate-based meta-analysis of 266 fMRI pain studies to study functional brain reorganisation in CP and experimental models of hyperalgesia. The pattern of nociceptive brain activation was similar in CP, hyperalgesia and normalgesia in controls. However, elevated likelihood of activation was detected in the left putamen, left frontal gyrus and right insula in CP when comparing stimuli at the most painful site vs. other sites. Meta-analysis of contrast maps showed no difference between CP, controls, and mood conditions. In contrast, experimental hyperalgesia induced stronger activation in the bilateral insula, left cingulate and right frontal gyrus. Activation likelihood maps support a shared neural pain signature of cutaneous nociception in CP and controls. We also present a double dissociation between neural correlates of transient and persistent pain sensitisation, with generally increased activation intensity but unchanged pattern in experimental hyperalgesia and, by contrast, focally increased activation likelihood, but unchanged intensity, in CP when stimulated at the most painful body part. Copyright © 2016. Published by Elsevier Ltd.

  11. Satellite information on Orlando, Florida. [coordination of LANDSAT and Skylab data and EREP photography

    NASA Technical Reports Server (NTRS)

    Hannah, J. W.; Thomas, G. L.; Esparza, F.

    1975-01-01

    A land use map of Orange County, Florida was prepared from EREP photography, while LANDSAT and EREP multispectral scanner data were used to provide more detailed information on Orlando and its suburbs. The generalized maps were prepared by tracing the patterns on an overlay, using an enlarging viewer. Digital analysis of the multispectral scanner data used the maximum likelihood classification method with training-sample input and computer printer mapping of the results. Urban features delineated by the maps are discussed. It is concluded that computer classification, accompanied by human interpretation and manual simplification, can produce land use maps which are useful on a regional, county, and city basis.
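The maximum likelihood classification mentioned above assigns each pixel to the class whose multivariate Gaussian, estimated from training samples, gives the pixel the highest likelihood. The sketch below uses two synthetic two-band classes with invented spectral means; a real LANDSAT workflow would use many bands and classes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training samples: two spectral bands for two land-use classes
water = rng.normal([20.0, 10.0], 2.0, size=(100, 2))
urban = rng.normal([60.0, 50.0], 8.0, size=(100, 2))

def gaussian_ml_classifier(classes):
    """Per-class Gaussian log-likelihood g_k(x); the maximum likelihood
       rule assigns each pixel to argmax_k g_k(x)."""
    stats = [(np.mean(c, axis=0), np.cov(c.T)) for c in classes]
    def classify(x):
        scores = []
        for mu, cov in stats:
            diff = x - mu
            # log N(x; mu, cov) up to a constant shared by all classes
            scores.append(-0.5 * (np.log(np.linalg.det(cov))
                                  + diff @ np.linalg.solve(cov, diff)))
        return int(np.argmax(scores))
    return classify

classify = gaussian_ml_classifier([water, urban])
```

Applying `classify` to every pixel vector of the scanner data yields the class map that was then rendered with the computer line printer.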

  12. Box-Cox transformation for QTL mapping.

    PubMed

    Yang, Runqing; Yi, Nengjun; Xu, Shizhong

    2006-01-01

    The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some form of transformation should be applied to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called the transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) the Box-Cox transformation can substantially increase the power of QTL detection; (2) the Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
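The core idea, estimating the transformation factor λ by maximum likelihood jointly with the distribution parameters, is available off the shelf in SciPy. The sketch below applies it to a synthetic log-normal phenotype (skewed, so the normality assumption is violated); it does not reproduce the paper's joint estimation with the QTL parameters, only the Box-Cox step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic skewed phenotype: log-normal, for which lambda ~ 0 (log transform)
y = np.exp(rng.normal(0.0, 0.5, size=500))

# scipy.stats.boxcox chooses the transformation factor lambda by
# maximizing the Box-Cox log-likelihood, then returns the transformed data
y_t, lam = stats.boxcox(y)
```

For log-normal data the fitted λ should land near 0, recovering the log transform as a special case; for already-normal data it lands near 1, leaving the data essentially unchanged, consistent with simulation result (3) above.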

  13. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    PubMed

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  14. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors

    PubMed Central

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus. PMID:18466597

  15. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.

  16. Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.

    PubMed

    Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard

    2017-12-12

    We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
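The unbiased building block behind the approach above can be sketched in a few lines: given accumulated dwell times and transition counts between bins, the maximum likelihood rate coefficients of a master equation are k_ij = c_ij / t_i, and equilibrium free energies follow from the stationary distribution of the resulting rate matrix. This is emphatically not DHAMed itself (which additionally handles biased potentials, combines multiple simulations, and enforces detailed balance in the likelihood); all counts below are hypothetical.

```python
import numpy as np

# Hypothetical counts from one unbiased trajectory over 3 bins:
# c[i, j] = number of observed i -> j transitions, t[i] = time spent in bin i
c = np.array([[0.0, 40.0, 0.0],
              [10.0, 0.0, 10.0],
              [0.0, 30.0, 0.0]])
t = np.array([4.0, 2.0, 6.0])

# Maximum likelihood rate coefficients of the master equation: k_ij = c_ij / t_i
K = c / t[:, None]
np.fill_diagonal(K, -K.sum(axis=1))   # rows of a rate matrix sum to zero

# Equilibrium populations: the stationary (zero-eigenvalue) left null vector
w, v = np.linalg.eig(K.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

free_energy = -np.log(pi)   # in units of kT, up to an additive constant
```

In the biased setting, DHAMed-style methods replace these closed-form rates with a numerical likelihood maximization in which the bias factors of each window enter the transition probabilities.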

  17. Space Technology 5 Multipoint Observations of Temporal and Spatial Variability of Field-Aligned Currents

    NASA Technical Reports Server (NTRS)

    Le, G.; Wang, Y.; Slavin, J. A.; Strangeway, R. L.

    2009-01-01

    Space Technology 5 (ST5) is a constellation mission consisting of three microsatellites. It provides the first multipoint magnetic field measurements in low Earth orbit, which enables us to separate spatial and temporal variations. In this paper, we present a study of the temporal variability of field-aligned currents using the ST5 data. We examine the field-aligned current observations during and after a geomagnetic storm and compare the magnetic field profiles at the three spacecraft. The multipoint data demonstrate that mesoscale current structures, commonly embedded within large-scale current sheets, are very dynamic with highly variable current density and/or polarity on time scales of approximately 10 min. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for mesoscale currents and approximately 10 min for large-scale currents. These temporal features are very likely associated with dynamic variations of their charge carriers (mainly electrons) as they respond to the variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of mesoscale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  18. Exploring microwave resonant multi-point ignition using high-speed schlieren imaging

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Zhang, Guixin; Xie, Hong; Deng, Lei; Wang, Zhi

    2018-03-01

    Microwave plasma offers a potential method to achieve rapid combustion in a high-speed combustor. In this paper, microwave resonant multi-point ignition and its control method have been studied via high-speed schlieren imaging. The experiment was conducted with the microwave resonant ignition system and the schlieren optical system. A microwave pulse at 2.45 GHz with 2 ms width and 3 kW peak power was employed as the ignition energy source to produce initial flame kernels in the combustion chamber. A reflective schlieren method was designed to illustrate the flame development process with a high-speed camera. The bottom of the combustion chamber was made of a quartz glass coated with indium tin oxide, which ensures sufficient microwave reflection and light penetration. Ignition experiments were conducted at 2 bar in stoichiometric methane-air mixtures. Schlieren images show that flame kernels were generated at more than one location simultaneously and that the flame propagated at different speeds from different flame kernels. Ignition kernels were classified into three types according to their appearance. Pressure curves and combustion duration also show that multi-point ignition plays a significant role in accelerating combustion.

  19. Exploring microwave resonant multi-point ignition using high-speed schlieren imaging.

    PubMed

    Liu, Cheng; Zhang, Guixin; Xie, Hong; Deng, Lei; Wang, Zhi

    2018-03-01

    Microwave plasma offers a potential method to achieve rapid combustion in a high-speed combustor. In this paper, microwave resonant multi-point ignition and its control method have been studied via high-speed schlieren imaging. The experiment was conducted with the microwave resonant ignition system and the schlieren optical system. A microwave pulse at 2.45 GHz with 2 ms width and 3 kW peak power was employed as the ignition energy source to produce initial flame kernels in the combustion chamber. A reflective schlieren method was designed to illustrate the flame development process with a high-speed camera. The bottom of the combustion chamber was made of a quartz glass coated with indium tin oxide, which ensures sufficient microwave reflection and light penetration. Ignition experiments were conducted at 2 bar in stoichiometric methane-air mixtures. Schlieren images show that flame kernels were generated at more than one location simultaneously and that the flame propagated at different speeds from different flame kernels. Ignition kernels were classified into three types according to their appearance. Pressure curves and combustion duration also show that multi-point ignition plays a significant role in accelerating combustion.

  20. Maximizing the significance in Higgs boson pair analyses [Mad-Maximized Higgs Pair Analyses]

    DOE PAGES

    Kling, Felix; Plehn, Tilman; Schichtel, Peter

    2017-02-22

    Here, we study Higgs pair production with a subsequent decay to a pair of photons and a pair of bottoms at the LHC. We use the log-likelihood ratio to identify the kinematic regions which either allow us to separate the di-Higgs signal from backgrounds or to determine the Higgs self-coupling. We find that both regions are separate enough to ensure that details of the background modeling will not affect the determination of the self-coupling. Assuming dominant statistical uncertainties we determine the best precision with which the Higgs self-coupling can be probed in this channel. We finally comment on the same questions at a future 100 TeV collider.

  1. Maximizing the significance in Higgs boson pair analyses [Mad-Maximized Higgs Pair Analyses]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kling, Felix; Plehn, Tilman; Schichtel, Peter

    Here, we study Higgs pair production with a subsequent decay to a pair of photons and a pair of bottoms at the LHC. We use the log-likelihood ratio to identify the kinematic regions which either allow us to separate the di-Higgs signal from backgrounds or to determine the Higgs self-coupling. We find that both regions are separate enough to ensure that details of the background modeling will not affect the determination of the self-coupling. Assuming dominant statistical uncertainties we determine the best precision with which the Higgs self-coupling can be probed in this channel. We finally comment on the same questions at a future 100 TeV collider.

  2. Maximizing Friend-Making Likelihood for Social Activity Organization

    DTIC Science & Technology

    2015-05-22

    selected individuals may not be able to socialize with each other effectively. 5.2 Performance Evaluation Baseline can only find the optimal solutions of...datasets to demonstrate the efficiency and effectiveness of our proposed algorithm. 1 Introduction With the popularity and accessibility of online...to-face activities are initiated in Meetup3. The activities organized via OSNs cover a wide variety of purposes, e.g., friend gatherings, cocktail

  3. Biological Weapons and Modern Warfare

    DTIC Science & Technology

    1991-04-01

    every preparation for reducing its effectiveness and thereby reduce the likelihood of its use. In order to plan such preparation, it is advantageous to...attack rates could be maximized and the forces using the weapon protected from its effects. In today's climate, BW agents are also attractive weapons...questions about the agreement's true effectiveness. Verification of compliance was not addressed. D. World War II: Events during and following World War

  4. Current Approaches to Improving Marine Geophysical Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Jencks, J. H.; Cartwright, J.; Varner, J. D.; Anderson, C.; Robertson, E.; McLean, S. J.

    2016-02-01

    Exploring, understanding, and managing the global oceans is a challenge when hydrographic maps are available for only 5% of the world's oceans, even less of which have been mapped geologically or to identify benthic habitats. Seafloor mapping is expensive and most government and academic budgets continue to tighten. The first step for any mapping program, before setting out to map uncharted waters, should be to identify if data currently exist in the area of interest. There are many reasons why this seemingly simple suggestion is not commonplace. While certain datasets are accessible online (e.g., NOAA's NCEI, EMODnet, IHO-DCDB), many are not. In some cases, data that are publicly available are difficult to discover and access. No single agency can successfully resolve the complex and pressing demands of ocean and coastal mapping and the associated data stewardship. NOAA partners with other federal agencies to provide an integrated approach to carry out a coordinated and comprehensive ocean and coastal mapping program. In order to maximize the return on their mapping investment, legacy and newly acquired data must be easily discoverable and readily accessible by numerous applications and formats now and well into the future. At NOAA's National Centers for Environmental Information (NCEI), resources are focused on ensuring the security and widespread availability of the Nation's scientific marine geophysical data through long-term stewardship. The public value of these data and products is maximized by streamlining data acquisition and processing operations, minimizing redundancies, facilitating discovery, and developing common standards to promote re-use. For its part, NCEI draws on a variety of software technologies and adheres to international standards to meet this challenge. The result is a geospatial framework built on spatially-enabled databases, standards-based web services, and International Standards Organization (ISO) metadata. 
In order to maximize effectiveness in ocean and coastal mapping, we must be sure that limited funding is not being used to collect data in areas where data already exist. By making data more accessible, NCEI extends the use of, and therefore the value of, these data. Working together, we can ensure that valuable data are made available to the broadest community.

  5. A methodology for the generation of the 2-D map from unknown navigation environment by traveling a short distance

    NASA Technical Reports Server (NTRS)

    Bourbakis, N.; Sarkar, D.

    1994-01-01

    A technique for generation of a 2-D space map by traveling a short distance is described. The space to be mapped can be classified as: (1) space without obstacles, (2) space with stationary obstacles, and (3) space with moving obstacles. This paper presents the methodology used to generate a 2-D map of an unknown navigation space. The ability to minimize the redundancy during traveling and maximize the confidence function for generation of the map are advantages of this technique.

  6. Refining the region of branchio-oto-renal syndrome and defining the flanking markers on chromosome 8q by genetic mapping.

    PubMed Central

    Kumar, S.; Kimberling, W. J.; Connolly, C. J.; Tinley, S.; Marres, H. A.; Cremers, C. W.

    1994-01-01

    Branchio-oto-renal syndrome (BOR) is an autosomal dominant disorder associated with external-, middle-, and inner-ear malformations, branchial cleft sinuses, cervical fistulas, mixed hearing loss, and renal anomalies. The gene for BOR was previously mapped to the long arm of chromosome 8 (8q). Several polymorphic dinucleotide repeat markers were investigated for linkage in two large BOR families, and the region of localization was refined. Two-point linkage analysis yielded maximum lod scores of 7.44 at theta = .03 and 6.71 at theta = .04, with markers D8S279 and D8S260, respectively. A multipoint analysis was carried out to position the BOR gene within a defined region using markers D8S165, D8S285, PENK, D8S166, D8S260, D8S279, D8S164, D8S286, D8S84, D8S275, D8S167, D8S273, and D8S271. Haplotype analysis of recombination events at these polymorphic loci was also performed in multigeneration BOR kindreds. The linkage analysis and analysis of recombination events identified markers that clearly flank the BOR locus. The order was determined to be D8S260-BOR-D8S279 at odds > 10(3):1 over the other possible orders. These flanking markers provide a resource for high-resolution mapping toward cloning and characterizing the BOR gene. PMID:7977379

  7. Defining the proximal border of the Huntington disease candidate region by multipoint recombination analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skraastad, M.I.; De Rooij, K.E.; De Koning Gans, P.A.M.

    1993-06-01

    The candidate region for the Huntington disease (HD) gene has been narrowed down to a 2.2-Mb region between D4S10 and D4S98 on the short arm of chromosome 4. To map the HD gene within this candidate region, 65 Dutch HD families were studied. In total, 338 informative meioses were analyzed and 11 multiple informative crossovers were detected. Assuming a minimum number of recombinations and no double recombinations, the multiple informative crossovers are consistent with one specific genetic order for 12 loci: D4S10-(D4S81,D4S126)-D4S125-(D4S127,D4S95)-D4S43-(D4S115, D4S96, D4S111, D4S90, D4S141). This is in agreement with the known data derived from similar and other methods. The loci between brackets could not be mapped relative to each other. In the family material, two informative three-point marker recombination events were detected in the proximal HD candidate region, which are also informative for HD. Both recombination events map the HD gene distal to D4S81 and most likely distal to D4S125, narrowing down the HD candidate region to a 1.7-Mb region between D4S125 and D4S98. 39 refs., 3 figs., 2 tabs.

  8. Linkage analysis of autopsy-confirmed familial Alzheimer disease supports an Alzheimer disease locus in 8q24.

    PubMed

    Sillén, Anna; Brohede, Jesper; Forsell, Charlotte; Lilius, Lena; Andrade, Jorge; Odeberg, Jacob; Kimura, Toru; Winblad, Bengt; Graff, Caroline

    2011-01-01

    We have previously reported the results of an extended genome-wide scan of Swedish Alzheimer disease (AD)-affected families; in this paper, we analyzed a subset of these families with autopsy-confirmed AD. We report the fine-mapping, using both microsatellite markers and single-nucleotide polymorphisms (SNPs), in the observed maximum logarithm of the odds (LOD)-2 unit (LOD(max)-2) region under the identified linkage peak, linkage analysis of the fine-mapping data with additionally analyzed pedigrees, and association analysis of SNPs selected from candidate genes in the linked interval. The subset was made on the criterion of at least one autopsy-confirmed AD case per family, resulting in 24 families. Linkage analysis of a family subset having at least one autopsy-confirmed AD case showed a significant nonparametric single-point LOD score of 4.4 in 8q24. Fine-mapping under the linkage peak with 10 microsatellite markers yielded an increase in the multipoint (mpt) LOD score from 2.1 to 3.0. SNP genotyping was performed on 21 selected candidate transcripts of the LOD(max)-2 region. Both family-based association and linkage analysis were performed on extended material from 30 families, resulting in a suggestive linkage at peak marker rs6577853 (mpt LOD score = 2.4). The 8q24 region has been implicated to be involved in AD etiology. Copyright © 2011 S. Karger AG, Basel.

  9. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  10. Mass and Volume Optimization of Space Flight Medical Kits

    NASA Technical Reports Server (NTRS)

    Keenan, A. B.; Foy, Millennia Hope; Myers, Jerry

    2014-01-01

    Resource allocation is a critical aspect of space mission planning. All resources, including medical resources, are subject to a number of mission constraints, such as maximum mass and volume. However, unlike many resources, there is often limited understanding of how to optimize medical resources for a mission. The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulates outcomes and describes the impact of medical events in terms of lost crew time, medical resource usage, and the potential for medically required evacuation. Previously published work describes an approach that uses the IMM to generate optimized medical kits that maximize benefit to the crew subject to mass and volume constraints. We improve upon the results obtained previously and extend our approach to minimize mass and volume while meeting some benefit threshold. METHODS: We frame the medical kit optimization problem as a modified knapsack problem and implement an algorithm utilizing dynamic programming. Using this algorithm, optimized medical kits were generated for 3 mission scenarios with the goal of minimizing the medical kit mass and volume for a specified likelihood of evacuation or Crew Health Index (CHI) threshold. The algorithm was expanded to generate medical kits that maximize likelihood of evacuation or CHI subject to mass and volume constraints. RESULTS AND CONCLUSIONS: In maximizing benefit to crew health subject to certain constraints, our algorithm generates medical kits that more closely resemble the unlimited-resource scenario than previous approaches which leverage medical risk information generated by the IMM.
    Our work here demonstrates that this algorithm provides an efficient and effective means to objectively allocate medical resources for spaceflight missions, and an effective means of addressing tradeoffs between medical resource allocations and crew mission success parameters.
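    The knapsack framing above can be sketched with a standard 0/1 dynamic program. The item names, masses, and benefit scores below are invented for illustration (the IMM derives benefit probabilistically from simulated medical events; this toy only shows the optimization step, assuming integer masses):

```python
# Illustrative 0/1 knapsack by dynamic programming: pick medical items to
# maximize a benefit score under a mass budget. Items are hypothetical.

def knapsack(items, capacity):
    """items: list of (name, mass, benefit) with integer masses."""
    best = [0] * (capacity + 1)               # best[m] = max benefit using mass <= m
    choice = [[] for _ in range(capacity + 1)]
    for name, mass, benefit in items:
        # iterate mass in reverse so each item is used at most once
        for m in range(capacity, mass - 1, -1):
            if best[m - mass] + benefit > best[m]:
                best[m] = best[m - mass] + benefit
                choice[m] = choice[m - mass] + [name]
    return best[capacity], choice[capacity]

items = [("suture kit", 3, 10), ("analgesics", 1, 6),
         ("splint", 4, 9), ("IV fluids", 5, 12)]
value, picked = knapsack(items, 8)   # mass budget of 8 units
```

    The dual problem the abstract mentions (minimize mass subject to a benefit threshold) can be answered from the same table by scanning for the smallest capacity whose optimum meets the threshold.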

  11. A high-density transcript linkage map with 1,845 expressed genes positioned by microarray-based Single Feature Polymorphisms (SFP) in Eucalyptus

    PubMed Central

    2011-01-01

    Background Technological advances are progressively increasing the application of genomics to a wider array of economically and ecologically important species. High-density maps enriched for transcribed genes facilitate the discovery of connections between genes and phenotypes. We report the construction of a high-density linkage map of expressed genes for the heterozygous genome of Eucalyptus using Single Feature Polymorphism (SFP) markers. Results SFP discovery and mapping were achieved using pseudo-testcross screening and selective mapping to simultaneously optimize linkage mapping and microarray costs. SFP genotyping was carried out by hybridizing complementary RNA, prepared from the xylem of 4.5-year-old trees, to an SFP array containing 103,000 25-mer oligonucleotide probes representing 20,726 unigenes derived from a modest-sized expressed sequence tag collection. An SFP-mapping microarray with 43,777 selected candidate SFP probes representing 15,698 genes was subsequently designed and used to genotype SFPs in a larger subset of the segregating population drawn by selective mapping. A total of 1,845 genes were mapped, with 884 of them ordered with high likelihood support on a framework map anchored to 180 microsatellites, with an average density of 1.2 cM. Using more probes per unigene doubled the likelihood of detecting segregating SFPs, eventually resulting in more genes mapped. In silico validation showed that 87% of the SFPs map to the expected location on the 4.5X draft sequence of the Eucalyptus grandis genome. Conclusions The Eucalyptus 1,845-gene map is the most highly enriched map for transcriptional information for any forest tree species to date. It represents a major improvement on the number of genes previously positioned on Eucalyptus maps and provides an initial glimpse at the gene space of this global tree genome.
    A general protocol is proposed to build high-density transcript linkage maps in less characterized plant species by SFP genotyping, with a concurrent objective of reducing microarray costs. High-density gene-rich maps represent a powerful resource to assist gene discovery endeavors when used in combination with QTL and association mapping, and should be especially valuable in assisting the assembly of the reference genome sequences soon to come for several plant and animal species. PMID:21492453

  12. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
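    The catalog-level matching described here can be sketched with SciPy's implementation of the Hungarian algorithm. The 3×3 matrix of association log-likelihoods below is invented for illustration; the essential move is negating it, since `linear_sum_assignment` minimizes total cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy assignment problem: rows are detections in catalog A, columns in
# catalog B, entries are (made-up) association log-likelihoods. A one-to-one
# matching maximizing the total log-likelihood is found in polynomial time.
log_like = np.array([[-1.0, -5.0, -9.0],
                     [-4.0, -2.0, -6.0],
                     [-8.0, -7.0, -3.0]])

# negate: minimizing cost == maximizing likelihood
rows, cols = linear_sum_assignment(-log_like)
total = log_like[rows, cols].sum()   # total log-likelihood of the matching
```

    For real crowded-field catalogs the matrix is sparse (most pairs are too far apart to be candidates), which specialized assignment solvers exploit.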

  13. Probabilistic Cross-identification in Crowded Fields as an Assignment Problem

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Basu, Amitabh

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  14. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
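    The bias the abstract describes can be illustrated for the simplest possible model, a constant rate μ, with invented numbers (this is not the authors' code). The Poisson MLE of μ is the sample mean, whereas χ² with the common low-count weighting σᵢ² = nᵢ reduces to the harmonic mean, which is biased low:

```python
import math
import random

random.seed(1)

def poisson_sample(mu):
    # Knuth's algorithm for drawing a Poisson-distributed count
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mu_true = 5.0
counts = [poisson_sample(mu_true) for _ in range(2000)]
counts = [n for n in counts if n > 0]   # chi^2 weights 1/n need n > 0

mle = sum(counts) / len(counts)                    # Poisson ML estimate (mean)
chi2 = len(counts) / sum(1.0 / n for n in counts)  # chi^2 minimizer (harmonic mean)
```

    By the arithmetic-harmonic mean inequality the χ² estimate always lands below the MLE here, and the gap widens as the true rate drops, matching the abstract's observation that the methods diverge below roughly 20 counts per measurement.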

  15. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    PubMed

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.

  16. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  17. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.

  18. Truncating mutation in the NHS gene: phenotypic heterogeneity of Nance-Horan syndrome in an asian Indian family.

    PubMed

    Ramprasad, Vedam Lakshmi; Thool, Alka; Murugan, Sakthivel; Nancarrow, Derek; Vyas, Prateep; Rao, Srinivas Kamalakar; Vidhya, Authiappan; Ravishankar, Krishnamoorthy; Kumaramanickavel, Govindasamy

    2005-01-01

    A four-generation family containing eight affected males who inherited X-linked developmental lens opacity and microcornea was studied. Some members of the family had mild to moderate nonocular clinical features suggestive of Nance-Horan syndrome. The purpose of the study was to genetically map the gene in the large 57-live-member Asian-Indian pedigree. PCR-based genotyping was performed on the X chromosome using fluorescent microsatellite markers (10-cM intervals). Parametric linkage analysis was performed using two disease models, assuming either recessive or dominant X-linked transmission, with the MLINK/ILINK and FASTLINK (version 4.1P) programs (http://www.hgmp.mrc.ac.uk/; provided in the public domain by the Human Genome Mapping Project Resources Centre, Cambridge, UK). The NHS gene in the linked region was screened for mutation. By fine mapping, the disease gene was localized to Xp22.13. Multipoint analysis placed the peak LOD of 4.46 at DXS987. The NHS gene mapped to this region. Mutational screening in all the affected males and carrier females (heterozygous form) revealed a truncating mutation, 115C-->T, in exon 1, resulting in conversion of a glutamine codon to a stop codon (Q39X); it was not observed in unaffected individuals and control subjects. Conclusions: A family with X-linked Nance-Horan syndrome had severe ocular, but mild to moderate nonocular, features. The clinical phenotype of the truncating mutation (Q39X) in the NHS gene suggests allelic heterogeneity at the NHS locus or the presence of modifier genes. X-linked families with cataract should be carefully examined for both ocular and nonocular features to exclude Nance-Horan syndrome. RT-PCR analysis did not suggest nonsense-mediated mRNA decay as the possible mechanism for clinical heterogeneity.

  19. Coupling between arterial pressure, cerebral blood velocity, and cerebral tissue oxygenation with spontaneous and forced oscillations.

    PubMed

    Rickards, Caroline A; Sprick, Justin D; Colby, Hannah B; Kay, Victoria L; Tzeng, Yu-Chieh

    2015-04-01

    We tested the hypothesis that transmission of arterial pressure to brain tissue oxygenation is low under conditions of arterial pressure instability. Two experimental models of hemodynamic instability were used in healthy human volunteers: (1) oscillatory lower body negative pressure (OLBNP) (N = 8; 5 male, 3 female), and (2) maximal LBNP to presyncope (N = 21; 13 male, 8 female). Mean arterial pressure (MAP), middle cerebral artery velocity (MCAv), and cerebral tissue oxygen saturation (ScO2) were measured non-invasively. For the OLBNP protocol, between 0 and -60 mmHg negative pressure was applied for 20 cycles at 0.05 Hz, then 20 cycles at 0.1 Hz. For the maximal LBNP protocol, progressive 5 min stages of chamber decompression were applied until the onset of presyncope. Spectral power of MAP, mean MCAv, and ScO2 were calculated within the VLF (0.04-0.07 Hz) and LF (0.07-0.2 Hz) ranges, and cross-spectral coherence was calculated for MAP-mean MCAv, MAP-ScO2, and mean MCAv-ScO2 at baseline, during each OLBNP protocol, and at the level prior to presyncope during maximal LBNP (sub-max). The key findings are (1) both 0.1 Hz OLBNP and sub-max LBNP elicited increases in LF power for MAP, mean MCAv, and ScO2 (p ≤ 0.08); (2) 0.05 Hz OLBNP increased VLF power in MAP and ScO2 only (p ≤ 0.06); (3) coherence between MAP-mean MCAv was consistently higher (≥0.71) compared with MAP-ScO2 and mean MCAv-ScO2 (≤0.43) during both OLBNP protocols and sub-max LBNP (p ≤ 0.04). These data indicate high linearity between pressure and cerebral blood flow variations, but reduced linearity between cerebral tissue oxygenation and both arterial pressure and cerebral blood flow. Measuring arterial pressure variability may not always provide adequate information about the downstream effects on cerebral tissue oxygenation, the key end-point of interest for neuronal viability.

  20. Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng

    2017-02-01

    This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.

  1. Injectors for Multipoint Injection

    NASA Technical Reports Server (NTRS)

    Prociw, Lev Alexander (Inventor); Ryon, Jason (Inventor)

    2015-01-01

    An injector for a multipoint combustor system includes an inner air swirler which defines an interior flow passage and a plurality of swirler inlet ports in an upstream portion thereof. The inlet ports are configured and adapted to impart swirl on flow in the interior flow passage. An outer air cap is mounted outboard of the inner swirler. A fuel passage is defined between the inner air swirler and the outer air cap, and includes a discharge outlet between downstream portions of the inner air swirler and the outer air cap for issuing fuel for combustion. The outer air cap defines an outer air circuit configured for substantially unswirled injection of compressor discharge air outboard of the interior flow passage.

  2. A measurement technique of time-dependent dielectric breakdown in MOS capacitors

    NASA Technical Reports Server (NTRS)

    Li, S. P.

    1974-01-01

    The statistical nature of time-dependent dielectric breakdown characteristics in MOS capacitors was evidenced by testing large numbers of capacitors fabricated on single wafers. A multipoint probe and automatic electronic visual display technique are introduced that will yield statistical results which are necessary for the investigation of temperature, electric field, thermal annealing, and radiation effects in the breakdown characteristics, and an interpretation of the physical mechanisms involved. It is shown that capacitors of area greater than 0.002 sq cm may yield worst-case results, and that a multipoint probe of capacitors of smaller sizes can be used to obtain a profile of nonuniformities in the SiO2 films.

  3. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  4. Multipoint vibrometry with dynamic and static holograms.

    PubMed

    Haist, T; Lingel, C; Osten, W; Winter, M; Giesen, M; Ritter, F; Sandfort, K; Rembe, C; Bendel, K

    2013-12-01

    We report on two multipoint vibrometers with user-adjustable positions of the measurement spots. Both systems use holograms for beam deflection. The measurement is based on heterodyne interferometry with a frequency difference of 5 MHz between reference and object beams. One of the systems uses programmable positioning of the spots in the object volume but is limited in light efficiency. The other system is based on static holograms in combination with mechanical adjustment of the measurement spots and does not have such a general efficiency restriction. Design considerations are given, and we show measurement results for both systems. In addition, we analyze the sensitivity of the systems, which is a major limitation compared to single-point scanning systems.

  5. τ mapping of the autofluorescence of the human ocular fundus

    NASA Astrophysics Data System (ADS)

    Schweitzer, Dietrich; Kolb, Achim; Hammer, Martin; Thamm, Eike

    2000-12-01

    Changes in the autofluorescence of the living eye-ground are assumed to be an important marker in uncovering the pathomechanism of age-related macular degeneration. Discrimination of fluorophores is required, as well as presentation of their 2-D distribution. Owing to the transmission of the ocular media, differentiation between fluorophores by their spectral excitation and emission ranges is limited. Using the laser scanner principle, the fluorescence lifetime can be measured in 2-D. Staying within the maximum permissible exposure, only a very weak signal is detectable, which is optimal for application of time-correlated single photon counting (TCSPC). In an experimental set-up, pulses of an actively mode-locked Ar+ laser (FWHM = 300 ps, repetition rate = 77.3 MHz, selectable wavelengths: 457.9, 465.8, 472.7, 496.5, 501.7, and 514.5 nm) excite the eye-ground during the scanning process. A routing module realizes the synchronization between scanning and TCSPC. Investigation of structured samples of Rhodamine 6G and Coumarin 522 showed that a mono-exponential decay can be calculated with an error of less than 10 percent using only a few hundred photons. The maximum likelihood algorithm delivers the most correct results. A first in vivo τ image exhibits a lifetime of 1.5 ns in the nasal part and 5 ns at large retinal vessels.
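    The few-hundred-photon regime can be sketched with a toy maximum-likelihood lifetime estimate. For a mono-exponential decay with no window truncation, the MLE of the lifetime τ is simply the mean photon arrival time; real TCSPC analysis must additionally deconvolve the instrument response and account for the finite repetition period, and the numbers below are illustrative only:

```python
import random

random.seed(42)

# Simulate ~400 photon arrival times from an exponential decay with
# tau = 1.5 ns (cf. the nasal-fundus value reported above), then recover
# tau by maximum likelihood: for exponential data the MLE is the mean.
tau_true = 1.5                                   # ns, illustrative
photons = [random.expovariate(1.0 / tau_true) for _ in range(400)]

tau_mle = sum(photons) / len(photons)            # ML estimate of lifetime
rel_err = abs(tau_mle - tau_true) / tau_true     # relative error
```

    The statistical error of this estimator scales as τ/√N, so a few hundred photons put it in the sub-10-percent range quoted in the abstract.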

  6. NASA Tech Briefs, April 2010

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Topics covered include: Active and Passive Hybrid Sensor; Quick-Response Thermal Actuator for Use as a Heat Switch; System for Hydrogen Sensing; Method for Detecting Perlite Compaction in Large Cryogenic Tanks; Using Thin-Film Thermometers as Heaters in Thermal Control Applications; Directional Spherical Cherenkov Detector; AlGaN Ultraviolet Detectors for Dual-Band UV Detection; K-Band Traveling-Wave Tube Amplifier; Simplified Load-Following Control for a Fuel Cell System; Modified Phase-meter for a Heterodyne Laser Interferometer; Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety; Sideband-Separating, Millimeter-Wave Heterodyne Receiver; Coaxial Propellant Injectors With Faceplate Annulus Control; Adaptable Diffraction Gratings With Wavefront Transformation; Optimizing a Laser Process for Making Carbon Nanotubes; Thermogravimetric Analysis of Single-Wall Carbon Nanotubes; Robotic Arm Comprising Two Bending Segments; Magnetostrictive Brake; Low-Friction, Low-Profile, High-Moment Two-Axis Joint; Foil Gas Thrust Bearings for High-Speed Turbomachinery; Miniature Multi-Axis Mechanism for Hand Controllers; Digitally Enhanced Heterodyne Interferometry; Focusing Light Beams To Improve Atomic-Vapor Optical Buffers; Landmark Detection in Orbital Images Using Salience Histograms; Efficient Bit-to-Symbol Likelihood Mappings; Capacity Maximizing Constellations; Natural-Language Parser for PBEM; Policy Process Editor for P(sup 3)BM Software; A Quality System Database; Trajectory Optimization: OTIS 4; and Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator.

  7. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
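
    The pairwise mutual information used above to score decompositions can be estimated with a simple plug-in histogram estimator. The sketch below (stdlib Python, discrete-valued signals; not the authors' toolbox) illustrates the quantity whose reduction the comparison measures:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) == c * n / (count_x * count_y)
        mi += p_joint * log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A signal shares 1 bit with itself (balanced binary), and
# (exactly, by construction here) 0 bits with an independent one.
a = [i % 2 for i in range(1000)]
b = [(i // 2) % 2 for i in range(1000)]
mi_same = mutual_information(a, a)
mi_ind = mutual_information(a, b)
```

    For continuous EEG signals one would first discretize (or use a kernel/nearest-neighbor estimator); the plug-in estimator is biased upward for small samples.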

  8. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  9. node2vec: Scalable Feature Learning for Networks

    PubMed Central

    Grover, Aditya; Leskovec, Jure

    2016-01-01

    Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
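
    The biased second-order random walk at the heart of node2vec can be sketched compactly. The code below follows the 1/p (return), 1 (common neighbor), 1/q (outward) weighting described in the paper; the toy graph and parameter values are illustrative only.

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """One second-order biased random walk in the style of node2vec.

    Having stepped t -> v, the unnormalized weight of moving on to
    neighbor x is 1/p if x == t (return), 1 if x neighbors t
    (BFS-like exploration), and 1/q otherwise (DFS-like exploration).
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))   # first step is unbiased
            continue
        prev = walk[-2]
        weights = [
            1.0 / p if x == prev else 1.0 if x in adj[prev] else 1.0 / q
            for x in nbrs
        ]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Toy graph (a triangle with a tail); p, q values are illustrative.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
random.seed(1)
walk = node2vec_walk(adj, start=0, length=10, p=0.5, q=2.0)
```

    In the full method, many such walks per node form the "sentences" fed to a skip-gram model that maximizes the likelihood of observing each node's sampled neighborhood.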

  10. Global Wind Map

    ERIC Educational Resources Information Center

    Journal of College Science Teaching, 2005

    2005-01-01

    This brief article describes a new global wind-power map that has quantified global wind power and may help planners place turbines in locations that can maximize power from the winds and provide widely available low-cost energy. The researchers report that their study can assist in locating wind farms in regions known for strong and consistent…

  11. Localization of a Susceptibility Gene for Familial Nonmedullary Thyroid Carcinoma to Chromosome 2q21

    PubMed Central

    McKay, James D.; Lesueur, Fabienne; Jonard, Laurence; Pastore, Alessandro; Williamson, Jan; Hoffman, Linda; Burgess, John; Duffield, Anne; Papotti, Mauro; Stark, Markus; Sobol, Hagay; Maes, Béatrice; Murat, Arnaud; Kääriäinen, Helena; Bertholon-Grégoire, Mireille; Zini, Michele; Rossing, Mary Anne; Toubert, Marie-Elisabeth; Bonichon, Françoise; Cavarec, Marie; Bernard, Anne-Marie; Boneu, Andrée; Leprat, Frédéric; Haas, Oskar; Lasset, Christine; Schlumberger, Martin; Canzian, Federico; Goldgar, David E.; Romeo, Giovanni

    2001-01-01

    The familial form of nonmedullary thyroid carcinoma (NMTC) is a complex genetic disorder characterized by multifocal neoplasia and a higher degree of aggressiveness than its sporadic counterpart. In a large Tasmanian pedigree (Tas1) with recurrence of papillary thyroid carcinoma (PTC), the most common form of NMTC, an extensive genomewide scan revealed a common haplotype on chromosome 2q21 in seven of the eight patients with PTC. To verify the significance of the 2q21 locus, we performed linkage analysis in an independent sample set of 80 pedigrees, yielding a multipoint heterogeneity LOD score (HLOD) of 3.07 (α=0.42) and a nonparametric linkage (NPL) score of 3.19 (P=.001) at marker D2S2271. Stratification based on the presence of at least one case of the follicular variant of PTC, the phenotype observed in the Tas1 family, identified 17 such pedigrees, yielding a maximal HLOD score of 4.17 (α=0.80) and NPL=4.99 (P=.00002) at markers AFMa272zg9 and D2S2271, respectively. These results indicate the existence of a susceptibility locus for familial NMTC on chromosome 2q21. PMID:11438887
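
    For readers unfamiliar with LOD scores: in the simplest phase-known two-point setting, the LOD score is the log10 likelihood ratio between a candidate recombination fraction theta and free recombination (theta = 0.5). A minimal sketch, far simpler than the multipoint HLOD machinery used in the study:

```python
from math import log10

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point LOD score for phase-known, fully informative meioses:
    log10 likelihood ratio of recombination fraction theta versus
    free recombination (theta = 0.5)."""
    if not 0.0 < theta < 1.0:
        raise ValueError("theta must be in (0, 1)")
    n = recombinants + nonrecombinants
    l_theta = recombinants * log10(theta) + nonrecombinants * log10(1.0 - theta)
    return l_theta - n * log10(0.5)

# 2 recombinants out of 20 meioses, evaluated at theta = 0.1;
# LOD > 3 (odds of 1000:1) is the classical threshold for linkage.
best = lod_score(2, 18, theta=0.1)
```

    Multipoint and heterogeneity (HLOD) scores extend this idea to a disease locus moved along a fixed marker map and to a mixture of linked and unlinked families, respectively.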

  12. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1993-01-01

    This report presents the formulation of the optimal low- and medium-thrust orbit transfer control problem and methods for its numerical solution. The problem formulation is for final-mass maximization and allows for second-harmonic oblateness, atmospheric drag, and three-dimensional, non-coplanar, non-aligned elliptic terminal orbits. We set up several examples to demonstrate the ability of two indirect methods to solve the resulting TPBVPs. The methods demonstrated are the multiple-point shooting method as formulated in H. J. Oberle's subroutine BOUNDSCO, and the minimizing boundary-condition method (MBCM). We find that although both methods can converge to solutions, there are trade-offs to using either. BOUNDSCO has very poor convergence for guesses that do not exhibit the correct switching structure, whereas MBCM converges for a wider range of guesses. On the other hand, BOUNDSCO's multiple-point structure allows more freedom in guesses by increasing the number of node points, as opposed to guessing only the initial state in MBCM. Finally, we note an additional drawback of BOUNDSCO: the routine does not supply switching-function polarity information to the user's routines, but only the locations of a preset number of switching points.
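
    The indirect-method TPBVPs discussed here are solved by shooting: guess the unknown initial conditions, integrate forward, and adjust until the terminal boundary conditions are met. The toy below is a single-shooting sketch (bisection on one unknown slope, forward Euler), far simpler than BOUNDSCO's multiple shooting or MBCM and purely illustrative of the idea:

```python
def shoot(s, n=1000):
    """Integrate x'' = 6t on [0, 1] with x(0) = 0, x'(0) = s
    (forward Euler) and return the terminal value x(1)."""
    h = 1.0 / n
    x, v, t = 0.0, s, 0.0
    for _ in range(n):
        x, v, t = x + h * v, v + h * 6.0 * t, t + h
    return x

def single_shooting(target, lo=-5.0, hi=5.0, iters=60):
    """Bisect on the unknown initial slope until the terminal
    boundary condition x(1) = target is met."""
    f_lo = shoot(lo) - target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid) - target
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# x'' = 6t with x(0) = 0, x(1) = 1 has exact solution x = t^3,
# so the true initial slope is x'(0) = 0.
s = single_shooting(target=1.0)
```

    Multiple shooting partitions [0, 1] into segments with guessed states at each node, which is exactly the extra freedom in guesses attributed to BOUNDSCO above.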

  13. The utilization of mind map painting on 3D shapes with curved faces

    NASA Astrophysics Data System (ADS)

    Nur Sholikhah, Ayuk; Usodo, Budi; Pramudya, Ikrar

    2017-12-01

    This paper studies the use of mind map painting media for the topic of 3D shapes with curved faces and its effect on students' interest. Observation and literature study were applied as the research methods in designing the utilization of mind map painting. The result of this research is a design of mind map painting media that can improve students' ability to solve problems, improve their thinking ability, and maximize brain power. Accordingly, mind map painting in learning activities is considered to improve student interest.

  14. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.

    2014-01-01

    The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When exceeding a certain threshold carbon load, multi-point correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, the three protocols underestimated TC by 22%, 36% and 12%, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multi-point corrected data below the determined threshold and single-point results beyond it. The effectiveness of this correction method was supported by correlation with optical data.

  15. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.

    2014-07-01

    The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When exceeding a certain threshold carbon load, multipoint correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated 22, 36 and 12% of TC, respectively, with the corresponding thresholds being ~ 0, 20 and 25 μgC. For sucrose, however, such discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The correction method proposed was to use multipoint-corrected data when below the determined threshold, and single-point results when beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
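
    The hybrid correction proposed by the authors reduces to a simple threshold rule. A trivial sketch with hypothetical numbers (the function name and values are illustrative, not from the instrument software):

```python
def corrected_tc(tc_multipoint, tc_singlepoint, carbon_load, threshold):
    """Hybrid baseline correction (sketch of the rule proposed above):
    trust the multipoint-corrected total carbon below the protocol's
    threshold load; above it, where multipoint correction
    underestimates TC, fall back to the single-point value."""
    return tc_multipoint if carbon_load < threshold else tc_singlepoint

# Hypothetical values around a ~20 ugC threshold (IMPlong-like).
low_load = corrected_tc(18.0, 19.1, carbon_load=18.0, threshold=20.0)
high_load = corrected_tc(24.0, 31.5, carbon_load=31.5, threshold=20.0)
```

    The threshold itself is protocol-dependent (~0, 20 and 25 μgC for the three protocols characterized above) and must be determined empirically.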

  16. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori (MAP) estimation problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
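
    The pixel-level MAP decision can be sketched with simple Gaussian intensity models standing in for the paper's per-pixel statistics; the active-region idea would enter as a spatially varying prior. All models and numbers below are illustrative assumptions, not the paper's implementation:

```python
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    """Gaussian likelihood of an intensity value."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def map_label(pixel, bg_model, fg_model, prior_fg):
    """Pixel-level MAP decision: pick the class maximizing
    likelihood x prior. bg_model/fg_model are (mean, std) pairs."""
    p_fg = gaussian(pixel, *fg_model) * prior_fg
    p_bg = gaussian(pixel, *bg_model) * (1.0 - prior_fg)
    return "foreground" if p_fg > p_bg else "background"

# A bright pixel judged against a dark background model; in the
# paper, the active region would modulate prior_fg per pixel.
label = map_label(200.0, bg_model=(30.0, 10.0),
                  fg_model=(190.0, 25.0), prior_fg=0.3)
```

    Object-level reasoning then cleans up isolated misclassifications that a purely per-pixel rule like this cannot avoid.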

  17. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    PubMed

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.

  18. Advanced analysis of forest fire clustering

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Pereira, Mario; Golay, Jean

    2017-04-01

    Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study by a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than it would be in the case of a complete random Poisson process. By changing the number of grid cells (size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the data intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - box counting and sand box counting approaches. REFERENCES Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M. 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
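
    The multipoint Morisita index described above admits a compact implementation: with the region covered by Q cells containing counts n_i, I_m is the ratio of falling-factorial cell products to the same quantity for all N points, scaled by Q^(m-1). A sketch following the form given in Golay et al. (2014); the toy counts are illustrative:

```python
from math import prod

def multipoint_morisita(counts, m=2):
    """Multipoint Morisita index I_m from grid-cell counts (after
    Golay et al. 2014). I_m ~ 1 for a completely random (Poisson)
    pattern and > 1 for clustered patterns; larger m probes
    higher-order clustering."""
    q = len(counts)    # number of grid cells
    n = sum(counts)    # total number of points
    # Falling factorials c(c-1)...(c-m+1) count ordered m-tuples
    # of distinct points within each cell.
    num = sum(prod(c - k for k in range(m)) for c in counts)
    den = prod(n - k for k in range(m))
    return q ** (m - 1) * num / den

# All points piled into one of four cells (maximal clustering)
# versus points spread perfectly evenly across the cells.
clustered = multipoint_morisita([20, 0, 0, 0], m=2)
even = multipoint_morisita([5, 5, 5, 5], m=2)
```

    Recomputing I_m over grids of decreasing cell size yields the scaling behavior from which the intrinsic (fractal) dimension is estimated.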

  19. Symbolic Dynamics and Grammatical Complexity

    NASA Astrophysics Data System (ADS)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  20. Proton Density Fat Fraction Measurements at 1.5- and 3-T Hepatic MR Imaging: Same-Day Agreement among Readers and across Two Imager Manufacturers.

    PubMed

    Serai, Suraj D; Dillman, Jonathan R; Trout, Andrew T

    2017-07-01

    Purpose To determine the agreement of proton density fat fraction (PDFF) measurements obtained with hepatic magnetic resonance (MR) imaging among readers, imager manufacturers, and field strengths. Materials and Methods This HIPAA-compliant study was approved by the institutional review board. After providing informed consent, 24 adult volunteers underwent imaging with one 1.5-T MR unit (Ingenia; Philips Healthcare, Best, the Netherlands) and two different 3.0-T units (750 W [GE Healthcare, Waukesha, Wis] and Ingenia) on the same day to estimate hepatic PDFF. A single-breath-hold multipoint Dixon-based acquisition was performed with commercially available pulse sequences provided by the MR imager manufacturers (mDIXON Quant [Philips Healthcare], IDEAL IQ [GE Healthcare]). Five readers placed one large region of interest, inclusive of as much liver parenchyma as possible in the right lobe while avoiding large vessels, on imager-generated parametric maps to measure hepatic PDFF. Two-way single-measure intraclass correlation coefficients (ICCs) were used to assess interreader agreement and agreement across the three imaging platforms. Results Excellent interreader agreement for hepatic PDFF measurements was obtained with mDIXON Quant and the Philips 1.5-T unit (ICC, 0.995; 95% confidence interval [CI]: 0.991, 0.998), mDIXON Quant and the Philips 3.0-T unit (ICC, 0.992; 95% CI: 0.986, 0.996), and IDEAL IQ and the GE 3.0-T unit (ICC, 0.966; 95% CI: 0.939, 0.984). Individual reader ICCs for hepatic PDFF measurements across all three imager manufacturer-field strength combinations also showed excellent interimager agreement, ranging from 0.914 to 0.954. Conclusion Estimation of PDFF with hepatic MR imaging by using multipoint Dixon techniques is highly reproducible across readers, field strengths, and imaging platforms. © RSNA, 2017.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemaire, H.; Barat, E.; Carrel, F.

    In this work, we tested maximum likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is to test MAPEM algorithms that integrate prior values, specific to the gamma imaging context, for the data to be reconstructed. (authors)
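
    MLEM for emission imaging has a standard multiplicative update, x_j <- (x_j / sum_i A_ij) * sum_i A_ij * y_i / (Ax)_i, which preserves nonnegativity and increases the Poisson likelihood at each step. A minimal sketch on a toy system, not the GAMPIX/Caliste reconstruction code:

```python
def mlem(y, A, x0, iterations=50):
    """Maximum likelihood expectation-maximization (MLEM) for an
    emission model y ~ Poisson(A x), in plain Python lists.
    A is a list of rows (one per detector bin, one column per pixel)."""
    n_det, n_pix = len(A), len(x0)
    x = list(x0)
    # Sensitivity image: column sums of A.
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    for _ in range(iterations):
        # Forward-project the current estimate.
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_det)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_det)]
        # Multiplicative update: back-project the ratios.
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(n_det)) / sens[j]
             for j in range(n_pix)]
    return x

# Tiny 2-pixel, 2-detector toy: an identity system recovers y exactly.
x = mlem(y=[4.0, 9.0], A=[[1.0, 0.0], [0.0, 1.0]], x0=[1.0, 1.0])
```

    The MAPEM variant mentioned above adds a prior term to this update, regularizing the reconstruction instead of maximizing the likelihood alone.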

  2. Model Checking with Multi-Threaded IC3 Portfolios

    DTIC Science & Technology

    2015-01-15

    different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is...empirically show (cf. Sec. 5.2) that the predictions based on this formula have high accuracy. Note that each solver in the portfolio potentially searches...speedup of over 300. We also show that widening the proof search of ic3 by randomizing its SAT solver is not as effective as parallelization

  3. Molecular analysis and genetic mapping of the rhodopsin gene in families with autosomal dominant retinitis pigmentosa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunge, S.; Wedemann, H.; Samanns, C.

    1993-07-01

    Eighty-eight patients/families with autosomal dominant retinitis pigmentosa (RP) were screened for rhodopsin mutations. Direct sequencing revealed 13 different mutations in a total of 14 (i.e., 16%) unrelated patients. Five of these mutations (T4K, Q28H, R135G, F220C, and C222R) have not been reported so far. In addition, multipoint linkage analysis was performed on two large families with autosomal dominant RP due to rhodopsin mutations by using five DNA probes from 3q21-q24. No tight linkage was found between the rhodopsin locus (RHO) and D3S47 (θmax = 0.08). By six-point analysis, RHO was localized in the region between D3S21 and D3S47, with a maximum lod score of 13.447 directly at D3S20. 13 refs., 1 fig., 2 tabs.

  4. Genetic analysis of eight loci tightly linked to neurofibromatosis 1

    PubMed Central

    Stephens, Karen; Green, Philip; Riccardi, Vincent M.; Ng, Siu; Rising, Marcia; Barker, David; Darby, John K.; Falls, Kathleen M.; Collins, Francis S.; Willard, Huntington F.; Donis-Keller, Helen

    1989-01-01

    The genetic locus for neurofibromatosis 1 (NF1) has recently been mapped to the pericentromeric region of chromosome 17. We have genotyped eight previously identified RFLP probes on 50 NF1 families to determine the placement of the NF1 locus relative to the RFLP loci. Thirty-eight recombination events in the pericentromeric region were identified, eight involving crossovers between NF1 and loci on either chromosomal arm. Multipoint linkage analysis resulted in the unique placement of six loci at odds >100:1 in the order of pter–A10-41–EW301–NF1–EW207–CRI-L581–CRI-L946–qter. Owing to insufficient crossovers, three loci–D17Z1, EW206, and EW203–could not be uniquely localized. In this region female recombination rates were significantly higher than those of males. These data were part of a joint study aimed at the localization of both NF1 and tightly linked pericentromeric markers for chromosome 17. PMID:2491775

  5. Refinement of the cone-rod retinal dystrophy locus on chromosome 19q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory, C.Y.; Evans, K.; Bhattacharya, S.S.

    1994-11-01

    Cone-rod dystrophy (CRD) is a severe example of an inherited retinal dystrophy: ophthalmic diseases that as a group constitute the commonest causes of blindness in children in the developed world and account for a significant proportion of visual handicap in adults. Two case reports suggested loci for CRD-causing genes on chromosomes 18q and 17q. Recently, we reported the results of a total genome search that localized an autosomal dominant form of CRD to chromosome 19q in the region 19q13.1-q13.2. Since then, using data from a short tandem repeat-polymorphism linkage map of chromosome 19 and recently developed microsatellite markers in this region, we have been able to further refine the localization of the chromosome 19q CRD-causing gene. Seven new microsatellite markers were used to genotype 34 affected subjects, 22 unaffected subjects, and 15 spouses. Two-point, multipoint, and FASTMAP analyses were performed. 11 refs., 1 tab.

  6. KSC-06pd0446

    NASA Image and Video Library

    2006-02-15

    VANDENBERG AIR FORCE BASE, CALIF. - Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, workers adjust the first half of the fairing around the Space Technology 5 (ST5) spacecraft. The ST5, which contains three microsatellites with miniaturized redundant components and technologies, is mated to its launch vehicle, Orbital Sciences' Pegasus XL. Each of the ST5 microsatellites will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. Launch of ST5 and the Pegasus XL will be from underneath the belly of an L-1011 carrier aircraft on March 14 from Vandenberg Air Force Base.

  7. KSC-06pd0188

    NASA Image and Video Library

    2006-01-18

    VANDENBERG AIR FORCE BASE, Calif. — Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, the wrapped Space Technology 5 (ST5) spacecraft is ready for mating to the Pegasus XL launch vehicle. The satellites contain miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled for Feb. 28 from Vandenberg Air Force Base.

  8. KSC-06pd0187

    NASA Image and Video Library

    2006-01-18

    VANDENBERG AIR FORCE BASE, Calif. — Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, the wrapped Space Technology 5 (ST5) spacecraft is being prepared for mating to the Pegasus XL launch vehicle. The satellites contain miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled for Feb. 28 from Vandenberg Air Force Base.

  9. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.

  10. An improved non-blind image deblurring method based on FoEs

    NASA Astrophysics Data System (ADS)

    Zhu, Qidan; Sun, Lei

    2013-03-01

    Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ripples effectively, in contrast to maximum likelihood (ML). However, they have been found lacking in terms of restoration performance. To address this issue, we replace traditional MAP with MAP carrying a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.

  11. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  12. Decoding fMRI events in sensorimotor motor network using sparse paradigm free mapping and activation likelihood estimates.

    PubMed

    Tan, Francisca M; Caballero-Gaudes, César; Mullinger, Karen J; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L; Francis, Susan T; Gowland, Penny A

    2017-11-01

    Most functional MRI (fMRI) studies map task-driven brain activity using a block or event-related paradigm. Sparse paradigm free mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of activation likelihood estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the sensorimotor network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous electromyography (EMG)-fMRI experiments and motor tasks with short and long duration, and random interstimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this article discusses methodological implications and improvements to increase the decoding performance. Hum Brain Mapp 38:5778-5794, 2017. © 2017 Wiley Periodicals, Inc.

  13. Decoding fMRI events in Sensorimotor Motor Network using Sparse Paradigm Free Mapping and Activation Likelihood Estimates

    PubMed Central

    Tan, Francisca M.; Caballero-Gaudes, César; Mullinger, Karen J.; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L.; Francis, Susan T.; Gowland, Penny A.

    2017-01-01

    Most fMRI studies map task-driven brain activity using a block or event-related paradigm. Sparse Paradigm Free Mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of Activation Likelihood Estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the Sensorimotor Network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing and eye blinks). We validated the framework using simultaneous Electromyography-fMRI experiments and motor tasks with short and long duration, and random inter-stimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this paper discusses methodological implications and improvements to increase the decoding performance. PMID:28815863

  14. Empirically Guided Coordination of Multiple Evidence-Based Treatments: An Illustration of Relevance Mapping in Children's Mental Health Services

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Bernstein, Adam; Daleiden, Eric L.

    2011-01-01

    Objective: Despite substantial progress in the development and identification of psychosocial evidence-based treatments (EBTs) in mental health, there is minimal empirical guidance for selecting an optimal "set" of EBTs maximally applicable and generalizable to a chosen service sample. Relevance mapping is a proposed methodology that…

  15. Orienteering Map and Compass: A Guide and Outline to Its Science and Practice.

    ERIC Educational Resources Information Center

    Campbell, Mel; Burton, VirLynn

    This orienteering manual is used to teach map and compass skills to elementary school students on an overnight outdoor experience administered by volunteers. Although the experience is aimed at elementary students, student teachers have the opportunity to participate as instructors. After a few words to the volunteers on maximizing learning among…

  16. Applying six classifiers to airborne hyperspectral imagery for detecting giant reed

    USDA-ARS?s Scientific Manuscript database

    This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...

  17. Multi-point laser ignition device

    DOEpatents

    McIntyre, Dustin L.; Woodruff, Steven D.

    2017-01-17

    A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.

  18. Multipoint vibrometry with dynamic and static holograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haist, T.; Lingel, C.; Osten, W.

    2013-12-15

    We report on two multipoint vibrometers with user-adjustable position of the measurement spots. Both systems use holograms for beam deflection. The measurement is based on heterodyne interferometry with a frequency difference of 5 MHz between reference and object beam. One of the systems uses programmable positioning of the spots in the object volume but is limited concerning the light efficiency. The other system is based on static holograms in combination with mechanical adjustment of the measurement spots and does not have such a general efficiency restriction. Design considerations are given and we show measurement results for both systems. In addition, we analyze the sensitivity of the systems, which is a major limitation compared to single-point scanning systems.

  19. Development of a Novel Two Dimensional Surface Plasmon Resonance Sensor Using Multiplied Beam Splitting Optics

    PubMed Central

    Hemmi, Akihide; Mizumura, Ryosuke; Kawanishi, Ryuta; Nakajima, Hizuru; Zeng, Hulie; Uchiyama, Katsumi; Kaneki, Noriaki; Imato, Toshihiko

    2013-01-01

    A novel two dimensional surface plasmon resonance (SPR) sensor system with a multi-point sensing region is described. The use of multiplied beam splitting optics, as a core technology, permitted multi-point sensing to be achieved. This system was capable of simultaneously measuring nine sensing points. Calibration curves for sucrose obtained on nine sensing points were linear in the range of 0–10% with a correlation factor of 0.996–0.998 with a relative standard deviation of 0.090–4.0%. The detection limits defined as S/N = 3 were 1.98 × 10−6–3.91 × 10−5 RIU. This sensitivity is comparable to that of conventional SPR sensors. PMID:23299626

  20. Uncertainty evaluation in normalization of isotope delta measurement results against international reference materials.

    PubMed

    Meija, Juris; Chartrand, Michelle M G

    2018-01-01

    Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming standard practice, the existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models for explicit accounting of the measurement uncertainty of the international standards along with the uncertainty that is attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises due to a small number of replicate measurements, and discusses multi-laboratory data reduction while accounting for the inevitable correlations between laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
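The basic multi-point normalization step can be sketched as a straight-line fit of measured delta values against the certified values of the reference materials, then applied to samples. This minimal sketch uses ordinary least squares; note that the record above argues for errors-in-variables models that additionally propagate uncertainty in both axes, which this sketch does not do.

```python
def fit_normalization(measured, certified):
    """Least-squares line mapping measured delta values onto the certified
    scale, fitted from reference-material measurements. (OLS sketch; the
    paper's errors-in-variables models also weight uncertainties.)"""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(certified) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, certified))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def normalize(delta_measured, slope, intercept):
    """Place a sample's measured delta value on the certified scale."""
    return slope * delta_measured + intercept
```

For example, with three reference materials whose certified values lie exactly one delta unit below the measured values, the fit recovers slope 1 and intercept -1.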

  1. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, in order to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
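The tracking-probability versus sensor-density relationship mentioned above can be illustrated with the standard Poisson-field coverage approximation: if sensors are scattered as a homogeneous Poisson process, the chance that at least k sensors lie within sensing range of the target has a closed form. This is a textbook approximation for illustration, not necessarily the expression derived in the paper.

```python
import math

def tracking_probability(density, radius, k=1):
    """P(at least k sensors within sensing range of the target), assuming a
    homogeneous Poisson sensor field with the given density (sensors per
    unit area) and sensing radius. Standard approximation, offered here
    only to illustrate the density/probability trade-off."""
    lam = density * math.pi * radius ** 2   # expected sensors in range
    tail = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - tail
```

Raising the required number of monitoring sensors k lowers the tracking probability at a fixed density, which is the trade-off the guidelines in the record quantify.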

  2. Maximal Aerobic Power in Aging Men: Insights From a Record of 1-Hour Unaccompanied Cycling.

    PubMed

    Capelli, Carlo

    2018-01-01

    To analyze best 1-h unaccompanied performances of master athletes in ages ranging from 35 to 105 y to estimate the decay of maximal aerobic power (MAP) across the spectrum of age. MAP at the various ages was estimated by computing the metabolic power ([Formula: see text]) maintained to cover the distances during best 1-h unaccompanied performances established by master athletes of different classes of age and by assuming that they were able to maintain an [Formula: see text] equal to 88% of their MAP during 1 h of exhaustive exercise. MAP started monotonically decreasing at 47 y of age. Thereafter, it showed an average rate of decrease of ∼14% for the decades up to 105 y of age, similar to other classes of master athletes. The results confirm, by extending the analysis to centennial subjects, that MAP seems to start declining from the middle of the 5th decade of age, with an average percentage decay that is faster than that traditionally reported, even when one maintains a very active lifestyle. The proposed approach may be applied to other types of human locomotion for which the relationship between speed and [Formula: see text] is known.
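The arithmetic in this record reduces to two steps: divide the metabolic power sustained over the 1-h record by 0.88 to recover MAP, then apply the reported ~14% per-decade decline. A minimal sketch (units of W/kg are an assumption for illustration; the 0.88 fraction and 14%/decade figure come from the abstract):

```python
def map_from_one_hour_power(p_sustained_w_per_kg, fraction=0.88):
    """Estimate maximal aerobic power from the metabolic power sustained
    over a best 1-h effort, assuming it equals `fraction` of MAP."""
    return p_sustained_w_per_kg / fraction

def decade_decay(map_start, percent_per_decade=14.0, decades=1):
    """Apply the reported ~14% per-decade multiplicative decline in MAP."""
    return map_start * (1 - percent_per_decade / 100.0) ** decades
```

For instance, a sustained 1-h metabolic power of 17.6 W/kg implies a MAP of 20 W/kg under the 88% assumption.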

  3. Endpoint regularity of discrete multisublinear fractional maximal operators associated with [Formula: see text]-balls.

    PubMed

    Liu, Feng

    2018-01-01

    In this paper we investigate the endpoint regularity of the discrete m -sublinear fractional maximal operator associated with [Formula: see text]-balls, both in the centered and uncentered versions. We show that these operators map [Formula: see text] into [Formula: see text] boundedly and continuously. Here [Formula: see text] represents the set of functions of bounded variation defined on [Formula: see text].

  4. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

    There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details, whereas empirical operators can maximize the local detail of an HDR image at the cost of realism. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical operators and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of objective evaluation prove the effectiveness of the proposed solution.

  5. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

    Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient which represents the roles of both scalar and scalar diffusion is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDF's in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  6. Cardiac tissue slices: preparation, handling, and successful optical mapping.

    PubMed

    Wang, Ken; Lee, Peter; Mirams, Gary R; Sarathchandra, Padmini; Borg, Thomas K; Gavaghan, David J; Kohl, Peter; Bollensdorff, Christian

    2015-05-01

    Cardiac tissue slices are becoming increasingly popular as a model system for cardiac electrophysiology and pharmacology research and development. Here, we describe in detail the preparation, handling, and optical mapping of transmembrane potential and intracellular free calcium concentration transients (CaT) in ventricular tissue slices from guinea pigs and rabbits. Slices cut in the epicardium-tangential plane contained well-aligned in-slice myocardial cell strands ("fibers") in subepicardial and midmyocardial sections. Cut with a high-precision slow-advancing microtome at a thickness of 350 to 400 μm, tissue slices preserved essential action potential (AP) properties of the precutting Langendorff-perfused heart. We identified the need for a postcutting recovery period of 36 min (guinea pig) and 63 min (rabbit) to reach 97.5% of final steady-state values for AP duration (APD) (identified by exponential fitting). There was no significant difference between the postcutting recovery dynamics in slices obtained using 2,3-butanedione 2-monoxime or blebbistatin as electromechanical uncouplers during the cutting process. A rapid increase in APD, seen after cutting, was caused by exposure to ice-cold solution during the slicing procedure, not by tissue injury, differences in uncouplers, or pH-buffers (bicarbonate; HEPES). To characterize intrinsic patterns of CaT, AP, and conduction, a combination of multipoint and field stimulation should be used to avoid misinterpretation based on source-sink effects. In summary, we describe in detail the preparation, mapping, and data analysis approaches for reproducible cardiac tissue slice-based investigations into AP and CaT dynamics. Copyright © 2015 the American Physiological Society.

  7. Cardiac tissue slices: preparation, handling, and successful optical mapping

    PubMed Central

    Wang, Ken; Lee, Peter; Mirams, Gary R.; Sarathchandra, Padmini; Borg, Thomas K.; Gavaghan, David J.; Kohl, Peter

    2015-01-01

    Cardiac tissue slices are becoming increasingly popular as a model system for cardiac electrophysiology and pharmacology research and development. Here, we describe in detail the preparation, handling, and optical mapping of transmembrane potential and intracellular free calcium concentration transients (CaT) in ventricular tissue slices from guinea pigs and rabbits. Slices cut in the epicardium-tangential plane contained well-aligned in-slice myocardial cell strands ("fibers") in subepicardial and midmyocardial sections. Cut with a high-precision slow-advancing microtome at a thickness of 350 to 400 μm, tissue slices preserved essential action potential (AP) properties of the precutting Langendorff-perfused heart. We identified the need for a postcutting recovery period of 36 min (guinea pig) and 63 min (rabbit) to reach 97.5% of final steady-state values for AP duration (APD) (identified by exponential fitting). There was no significant difference between the postcutting recovery dynamics in slices obtained using 2,3-butanedione 2-monoxime or blebbistatin as electromechanical uncouplers during the cutting process. A rapid increase in APD, seen after cutting, was caused by exposure to ice-cold solution during the slicing procedure, not by tissue injury, differences in uncouplers, or pH-buffers (bicarbonate; HEPES). To characterize intrinsic patterns of CaT, AP, and conduction, a combination of multipoint and field stimulation should be used to avoid misinterpretation based on source-sink effects. In summary, we describe in detail the preparation, mapping, and data analysis approaches for reproducible cardiac tissue slice-based investigations into AP and CaT dynamics. PMID:25595366

  8. GeolOkit 1.0: a new Open Source, Cross-Platform software for geological data visualization in Google Earth environment

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Antoine; Bastin, Christophe; Watlet, Arnaud

    2016-04-01

    GIS software suites are today's essential tools to gather and visualise geological data, to apply spatial and temporal analysis and, ultimately, to create and share interactive maps for further geosciences' investigations. For these purposes, we developed GeolOkit: an open-source, freeware and lightweight software, written in Python, a high-level, cross-platform programming language. GeolOkit software is accessible through a graphical user interface, designed to run in parallel with Google Earth. It is a super user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to plot these into the Google Earth environment using KML code. This workflow requires no third-party software, except Google Earth itself. GeolOkit comes with a large number of geosciences' labels, symbols, colours and placemarks and may process: (i) multi-points data, (ii) contours via several interpolation methods, (iii) discrete planar and linear structural data in 2D or 3D supporting a large range of structural input formats, (iv) clustered stereonets and rose diagrams, (v) drawn cross-sections as vertical sections, (vi) georeferenced maps and vectors, (vii) field pictures using either geo-tracking metadata from a camera's built-in GPS module, or the same-day track of an external GPS. We invite you to discover all the functionalities of the GeolOkit software. As this project is under development, we welcome discussions regarding your needs, ideas and contributions to the GeolOkit project.

  9. Locus-specific oligonucleotide probes increase the usefulness of inter-Alu polymorphisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarnik, M.; Tang, J.Q.; Korab-Laskowska, M.

    1994-09-01

    Most of the mapping approaches are based on single-locus codominant markers of known location. Their multiplex ratio, defined as the number of loci that can be simultaneously tested, is typically one. An increased multiplex ratio was obtained by typing anonymous polymorphisms using PCR primers anchored in ubiquitous Alu-repeats. These so-called alumorphs are revealed by inter-Alu-PCR and seen as the presence or absence of an amplified band of a given length. We decided to map alumorphs and to develop locus-specific oligonucleotide (LSO) probes to facilitate their use and transfer among different laboratories. We studied the segregation of alumorphs in eight CEPH families, using two distinct Alu-primers, both directing PCR between the repeats in a tail-to-tail orientation. The segregating bands were assigned to chromosomal locations by two-point linkage analysis with CEPH markers (V6.0). They were excised from dried gels, reamplified, cloned and sequenced. The resulting LSOs were used as hybridization probes (i) to confirm chromosomal assignments in a human/hamster somatic cell hybrid panel, and (ii) to group certain allelic length variants, originally coded as separate dominant markers, into more informative codominant loci. These codominants were then placed by multipoint analysis on a microsatellite Genethon map. Finally, the LSO probes were used as polymorphic STSs, to identify by hybridization the corresponding markers among products of inter-Alu-PCR. The use of LSOs converts alumorphs into a system of non-anonymous, often multiallelic codominant markers which can be simultaneously typed, thus achieving the goal of a high multiplex ratio.

  10. Chemical landscape analysis with the OpenTox framework.

    PubMed

    Jeliazkova, Nina; Jeliazkov, Vedrin

    2012-01-01

    The Structure-Activity Relationships (SAR) landscape and activity cliffs concepts have their origins in medicinal chemistry and receptor-ligand interactions modelling. While intuitive, the definition of an activity cliff as a "pair of structurally similar compounds with large differences in potency" is commonly recognized as ambiguous. This paper proposes a new and efficient method for identifying activity cliffs and visualization of activity landscapes. The activity cliffs definition could be improved to reflect not the cliff steepness alone, but also the rate of the change of the steepness. The method requires explicitly setting similarity and activity difference thresholds, but provides means to explore multiple thresholds and to visualize in a single map how the thresholds affect the activity cliff identification. The identification of the activity cliffs is addressed by reformulating the problem as a statistical one, by introducing a probabilistic measure, namely, calculating the likelihood of a compound having large activity difference compared to other compounds, while being highly similar to them. The likelihood is effectively a quantification of a SAS Map with defined thresholds. Calculating the likelihood relies on four counts only, and does not require the pairwise matrix storage. This is a significant advantage, especially when processing large datasets. The method generates a list of individual compounds, ranked according to the likelihood of their involvement in the formation of activity cliffs, and goes beyond characterizing cliffs by structure pairs only. The visualisation is implemented by considering the activity plane fixed and analysing the irregularities of the similarity itself. It provides a convenient analogy to a topographic map and may help identifying the most appropriate similarity representation for each specific SAR space. The proposed method has been applied to several datasets, representing different biological activities. 
Finally, the method is implemented as part of an existing open source Ambit package and could be accessed via an OpenTox API compliant web service and via an interactive application, running within a modern, JavaScript enabled web browser. Combined with the functionalities already offered by the OpenTox framework, like data sharing and remote calculations, it could be a useful tool for exploring chemical landscapes online.
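The abstract states that the likelihood calculation "relies on four counts only" per compound. One plausible way to turn four such counts into a cliff score is a binomial-tail probability, sketched below; the function name, count labels, and the binomial formulation are illustrative assumptions, not the paper's exact statistic.

```python
from math import comb

def cliff_score(n11, n10, n01, n00):
    """For one compound: n11 = similar neighbours with a large activity
    difference, n10 = similar with a small difference, n01/n00 = the same
    split among dissimilar compounds. Returns P(X >= n11) for
    X ~ Binomial(n_similar, baseline rate of large differences); smaller
    values flag compounds more likely involved in activity cliffs.
    (Illustrative quantification only; the paper's formula may differ.)"""
    similar = n11 + n10
    total = n11 + n10 + n01 + n00
    if similar == 0:
        return 1.0
    p = (n11 + n01) / total
    return sum(comb(similar, k) * p ** k * (1 - p) ** (similar - k)
               for k in range(n11, similar + 1))
```

Because only the four counts are needed, compounds can be ranked without storing the full pairwise similarity matrix, which matches the scalability advantage the record describes.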

  11. The Buccaneer software for automated model building. 1. Tracing protein chains.

    PubMed

    Cowtan, Kevin

    2006-09-01

    A new technique for the automated tracing of protein chains in experimental electron-density maps is described. The technique relies on the repeated application of an oriented electron-density likelihood target function to identify likely C(alpha) positions. This function is applied both in the location of a few promising 'seed' positions in the map and to grow those initial C(alpha) positions into extended chain fragments. Techniques for assembling the chain fragments into an initial chain trace are discussed.

  12. Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques

    NASA Technical Reports Server (NTRS)

    Blodget, H. W.; Brown, G. F.; Moik, J. G.

    1975-01-01

    Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the parallelepiped and maximum-likelihood statistical approaches achieved very limited success in areas of highly dissected terrain. Computer-enhanced imagery developed by color compositing stretched MSS ratio data was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock types can be discriminated.

  13. Mapping soil types from multispectral scanner data.

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Zachary, A. L.

    1971-01-01

    Multispectral remote sensing and computer-implemented pattern recognition techniques were used for automatic 'mapping' of soil types. This approach involves subjective selection of a set of reference samples from a gray-level display of spectral variations which was generated by a computer. Each resolution element is then classified using a maximum likelihood ratio. Output is a computer printout on which the researcher assigns a different symbol to each class. Four soil test areas in Indiana were experimentally examined using this approach, and partially successful results were obtained.
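The per-element maximum-likelihood classification used in this record and the preceding one can be sketched as follows: fit per-band Gaussian statistics from reference samples for each class, then assign each pixel to the class with the highest log-likelihood. This sketch assumes independent bands (diagonal covariance); the class names and two-band pixels are hypothetical.

```python
import math

def fit_class_stats(samples):
    """Per-band mean and variance for one training class (diagonal covariance)."""
    n_bands = len(samples[0])
    stats = []
    for b in range(n_bands):
        vals = [s[b] for s in samples]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-9
        stats.append((mu, var))
    return stats

def log_likelihood(pixel, stats):
    """Gaussian log-likelihood of a pixel under one class's band statistics."""
    ll = 0.0
    for x, (mu, var) in zip(pixel, stats):
        ll += -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    return ll

def classify(pixel, class_stats):
    """Assign the class whose statistics give the maximum log-likelihood."""
    return max(class_stats, key=lambda c: log_likelihood(pixel, class_stats[c]))
```

Comparing log-likelihoods between classes is equivalent to the maximum likelihood ratio test the record mentions, since the ratio exceeds one exactly when one class's likelihood is larger.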

  14. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  15. Feature Statistics Modulate the Activation of Meaning during Spoken Word Processing

    ERIC Educational Resources Information Center

    Devereux, Barry J.; Taylor, Kirsten I.; Randall, Billi; Geertzen, Jeroen; Tyler, Lorraine K.

    2016-01-01

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in ("distinctiveness/sharedness") and likelihood of co-occurrence ("correlational…

  16. Localizing multiple X chromosome-linked retinitis pigmentosa loci using multilocus homogeneity tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ott, J.; Terwilliger, J.D.; Bhattacharya, S.

    1990-01-01

    Multilocus linkage analysis of 62 family pedigrees with X chromosome-linked retinitis pigmentosa (XLRP) was undertaken to determine the presence of possible multiple disease loci and to reliably estimate their map location. Multilocus homogeneity tests furnish convincing evidence for the presence of two XLRP loci, the likelihood ratio being 6.4 × 10⁹:1 in favor of two versus a single XLRP locus, and gave accurate estimates of their map locations. In 60-75% of the families, the location of an XLRP gene was estimated at 1 centimorgan distal to OTC, and in 25-40% of the families, an XLRP locus was located halfway between DXS14 (p58-1) and DXZ1 (Xcen), with an estimated recombination fraction of 25% between the two XLRP loci. There is also good evidence for a third XLRP locus, midway between DXS28 (C7) and DXS164 (pERT87), supported by a likelihood ratio of 293:1 for three versus two XLRP loci.

  17. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

    Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of the five positively selected genes identified by these methods map to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.

  18. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results from our simulation experiments show improved performance in both number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
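    The EM iteration that AECM and APECM accelerate can be illustrated with a plain two-component Gaussian mixture. This is a generic EM sketch on synthetic data, not the authors' APECM algorithm or their autoregressive regression model; all numbers below are invented for illustration.

```python
import math
import random

random.seed(0)

# Synthetic data from two Gaussian components (a stand-in for the
# time-series regression setting described in the abstract).
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Initial guesses for mixing weights, means, and standard deviations.
pi, mu, sigma = [0.5, 0.5], [1.0, 4.0], [1.0, 1.0]

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        w = [pi[k] * norm_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate weights, means, and standard deviations.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(var)

print(sorted(round(m, 1) for m in mu))  # component means recovered near 0 and 5
```

    APECM's contribution, per the abstract, is reorganizing such iterations with conditional maximization steps and cached augmented data so that each sweep is cheaper; the statistical model being fit is unchanged.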

  19. A heuristic multi-criteria classification approach incorporating data quality information for choropleth mapping

    PubMed Central

    Sun, Min; Wong, David; Kronenfeld, Barry

    2016-01-01

    Despite conceptual and technological advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are statistically indistinguishable may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-off between relevant criteria, and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426

  20. Audio Tracking in Noisy Environments by Acoustic Map and Spectral Signature.

    PubMed

    Crocco, Marco; Martelli, Samuele; Trucco, Andrea; Zunino, Andrea; Murino, Vittorio

    2018-05-01

    A novel method is proposed for generic target tracking by audio measurements from a microphone array. To cope with noisy environments characterized by persistent and high energy interfering sources, a classification map (CM) based on spectral signatures is calculated by means of a machine learning algorithm. Next, the CM is combined with the acoustic map, describing the spatial distribution of sound energy, in order to obtain a cleaned joint map in which contributions from the disturbing sources are removed. A likelihood function is derived from this map and fed to a particle filter yielding the target location estimation on the acoustic image. The method is tested on two real environments, addressing both speaker and vehicle tracking. The comparison with a couple of trackers, relying on the acoustic map only, shows a sharp improvement in performance, paving the way to the application of audio tracking in real challenging environments.

  1. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolical sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework is in a position to analyze raw sequence data automatically and directly. This analysis is carried out with little, or even complete absence of, prior information/domain knowledge.

  2. Are H-reflex and M-wave recruitment curve parameters related to aerobic capacity?

    PubMed

    Piscione, Julien; Grosset, Jean-François; Gamet, Didier; Pérot, Chantal

    2012-10-01

    Soleus Hoffmann reflex (H-reflex) amplitude is affected by a training period, and type and level of training are also well known to modify aerobic capacities. Previously, paired changes in H-reflex and aerobic capacity have been evidenced after endurance training. The aim of this study was to investigate possible links between H- and M-recruitment curve parameters and aerobic capacity collected on a cohort of subjects (56 young men) who were not involved in regular physical training. Maximal H-reflex normalized with respect to maximal M-wave (H(max)/M(max)) was measured, as well as other parameters of the H- or M-recruitment curves that provide information about the reflex or direct excitability of the motoneuron pool, such as thresholds of stimulus intensity to obtain H or M response (H(th) and M(th)), the ascending slope of the H-reflex or M-wave recruitment curves (H(slp) and M(slp)) and their ratio (H(slp)/M(slp)). Aerobic capacity, i.e., maximal oxygen consumption and maximal aerobic power (MAP), was estimated from a running field test and from an incremental test on a cycle ergometer, respectively. Maximal oxygen consumption was only correlated with M(slp), an indicator of muscle fiber heterogeneity (p < 0.05), whereas MAP was not correlated with any of the tested parameters (p > 0.05). Although a higher H-reflex is often described for subjects with a high aerobic capacity because of endurance training, at a basic level (i.e., outside the context of a training period) no correlation was observed between maximal H-reflex and aerobic capacity. Thus, none of the H-reflex or M-wave recruitment curve parameters, except M(slp), was related to the aerobic capacity of young, untrained male subjects.

  3. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810

  4. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  5. Improving z-tracking accuracy in the two-photon single-particle tracking microscope.

    PubMed

    Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of TSUNAMI microscope by 1.7 fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico . Our method can be generally applied to other 3D single-particle tracking techniques.
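    The MLE idea above can be sketched in a simplified one-dimensional form: assume a known mapping from particle position to the expected photon count in each detection channel, then choose the position that maximizes the Poisson likelihood of the observed counts. The channel response profiles and photon budget below are invented stand-ins, not TSUNAMI's actual multiplexed illumination geometry.

```python
import math

# Hypothetical 1D stand-in for z-localization: four detection channels whose
# expected photon count depends on the particle position z through assumed
# Gaussian collection profiles.
centers = [-1.0, -0.3, 0.3, 1.0]   # channel response centers (assumed)
brightness = 100.0                  # expected total photon budget (assumed)

def expected_counts(z):
    w = [math.exp(-(z - c) ** 2) for c in centers]
    s = sum(w)
    return [brightness * wi / s for wi in w]

def log_likelihood(z, counts):
    # Poisson log-likelihood, dropping the constant log(k!) terms.
    return sum(k * math.log(lam) - lam
               for k, lam in zip(counts, expected_counts(z)))

true_z = 0.4
observed = [round(lam) for lam in expected_counts(true_z)]  # near-noise-free demo

# Maximum likelihood estimate by dense grid search over candidate positions.
grid = [i / 1000.0 for i in range(-1500, 1501)]
z_hat = max(grid, key=lambda z: log_likelihood(z, observed))
print(round(z_hat, 2))
```

    In practice the likelihood would be maximized continuously rather than on a grid, and the response model would come from calibration of the instrument.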

  6. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  7. Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma

    NASA Astrophysics Data System (ADS)

    Seibert, Stanley; Latorre, Anthony

    2012-03-01

    We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.

  8. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of TSUNAMI microscope by 1.7 fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  9. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
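    The shrinkage idea behind MEA can be illustrated with a simple empirical-Bayes calculation: individual MLE estimates are pulled toward the regional mean by a weight derived from the estimated between-effect variance, so that extreme (likely overestimated) effects shrink the most. This is a generic sketch with made-up numbers, not the authors' hierarchical model.

```python
# Hypothetical per-SNP MLE effect estimates in one region; the first value is
# an extreme "top association" of the kind an HGS would select.
estimates = [2.1, -0.3, 0.4, 0.1, -0.2, 0.3, -0.1, 0.2]
se = 0.5  # common standard error of each estimate (assumed)

n = len(estimates)
regional_mean = sum(estimates) / n

# Method-of-moments estimate of the between-effect variance tau^2:
# observed spread minus the sampling noise, floored at zero.
sample_var = sum((b - regional_mean) ** 2 for b in estimates) / (n - 1)
tau2 = max(sample_var - se ** 2, 0.0)

# Posterior-mean style shrinkage: weight the data by tau^2 / (tau^2 + se^2).
w = tau2 / (tau2 + se ** 2)
shrunk = [regional_mean + w * (b - regional_mean) for b in estimates]

print(round(estimates[0], 2), "->", round(shrunk[0], 2))
```

    The extreme estimate moves substantially toward the regional mean while the small estimates barely change, which is the qualitative behavior the abstract reports for null effects under selection.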

  10. Road Map for Development of Crystal-Tolerant High Level Waste Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matyas, Josef; Vienna, John D.; Peeler, David

    This road map guides the research and development for formulation and processing of crystal-tolerant glasses, identifying near- and long-term activities that need to be completed over the period from 2014 to 2019. The primary objective is to maximize waste loading for Hanford waste glasses without jeopardizing melter operation by crystal accumulation in the melter or melter discharge riser. The potential applicability to the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF) is also addressed in this road map.

  11. A path to integration in an academic health science center.

    PubMed Central

    Panko, W. B.; Wilson, W.

    1992-01-01

    This article describes a networking and integration strategy in use at the University of Michigan Medical Center. This strategy builds upon the existing technology base and is designed to provide a roadmap that will direct short-term development along a productive, long-term path. It offers a way to permit the short-term development of incremental solutions to current problems while at the same time maximizing the likelihood that these incremental efforts can be recycled into a more comprehensive approach. PMID:1336413

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, David A.; Cournoyer, Michael E.; Merhege, James F.

    Criticality is the state of a nuclear chain reacting medium when the chain reaction is just self-sustaining (or critical). Criticality depends on nine interrelated parameters. We design criticality safety controls to constrain these parameters in order to minimize fissions and maximize neutron leakage and absorption in other materials, which makes criticality more difficult or impossible to achieve. The consequences of criticality accidents are discussed, the nine interrelated parameters that combine to affect criticality are described, and the criticality safety controls used to minimize the likelihood of a criticality accident are presented.

  13. Some approaches to optimal cluster labeling of aerospace imagery

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1980-01-01

    Some approaches are presented to the problem of labeling clusters using information from a given set of labeled and unlabeled aerospace imagery patterns. The assignment of class labels to the clusters is formulated as the determination of the best assignment over all possible ones with respect to some criterion. Cluster labeling is also viewed in terms of the probability of correct labeling, with maximization of a likelihood function. Results of the application of these techniques in the processing of remotely sensed multispectral scanner imagery data are presented.
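    The "best assignment over all possible ones" can be illustrated by brute force: score every permutation of class labels against a matrix of labeled-sample counts per cluster and keep the permutation that explains the most labels. The counts below are invented; for many clusters one would use the Hungarian algorithm rather than enumerating permutations.

```python
from itertools import permutations

# counts[i][j] = number of labeled samples of class j that fell in cluster i
# (hypothetical values for three clusters and three classes).
counts = [
    [40,  3,  2],
    [ 5, 35,  1],
    [ 2,  4, 30],
]
classes = range(len(counts[0]))

best_score, best_assignment = -1, None
for perm in permutations(classes):
    # perm[i] is the class label tentatively assigned to cluster i.
    score = sum(counts[i][perm[i]] for i in range(len(counts)))
    if score > best_score:
        best_score, best_assignment = score, perm

print(best_assignment, best_score)
```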

  14. Optimizing your options: Extracting the full economic value of transmission when planning under uncertainty

    DOE PAGES

    Munoz, Francisco D.; Watson, Jean -Paul; Hobbs, Benjamin F.

    2015-06-04

    The anticipated magnitude of needed investments in new transmission infrastructure in the U.S. requires that these investments be allocated in a way that maximizes the likelihood of achieving society's goals for power system operation. The use of state-of-the-art optimization tools can identify cost-effective investment alternatives, extract more benefits out of transmission expansion portfolios, and account for the huge economic, technology, and policy uncertainties that the power sector faces over the next several decades.

  15. Deuterated scintillators and their application to neutron spectroscopy

    NASA Astrophysics Data System (ADS)

    Febbraro, M.; Lawrence, C. C.; Zhu, H.; Pierson, B.; Torres-Isea, R. O.; Becchetti, F. D.; Kolata, J. J.; Riggins, J.

    2015-06-01

    Deuterated scintillators have been used as a tool for neutron spectroscopy without Neutron Time-of-Flight (n-ToF) for more than 30 years. This article will provide a brief historical overview of the technique and current uses of deuterated scintillators in the UM-DSA and DESCANT arrays. Pulse-shape discrimination and spectrum unfolding with the maximum-likelihood expectation maximization algorithm will be discussed. Experimental unfolding and cross section results from measurements of (d,n), (3He,n) and (α,n) reactions are shown.

  16. Boa constrictor (Boa constrictor): foraging behavior

    USGS Publications Warehouse

    Sorrell, G.G.; Boback, M.S.; Reed, R.N.; Green, S.; Montgomery, Chad E.; DeSouza, L.S.; Chiaraviglio, M.

    2011-01-01

    Boa constrictor is often referred to as a sit-and-wait or ambush forager that chooses locations to maximize the likelihood of prey encounters (Greene 1983. In Janzen [ed.], Costa Rica Natural History, pp. 380-382. Univ. Chicago Press, Illinois). However, as more is learned about the natural history of snakes in general, the dichotomy between active versus ambush foraging is becoming blurred. Herein, we describe an instance of diurnal active foraging by a B. constrictor, illustrating that this species exhibits a range of foraging behaviors.

  17. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
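    The gradient approach to SSE minimization described above can be sketched with an invented one-parameter model: fit the decay rate of an exponential to data by following the analytic gradient of the SSE downhill.

```python
import math

# Hypothetical observations generated from an exponential decay model
# y = exp(-rate * t); in a real study these would be measured quantities.
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
true_rate = 0.8
observed = [math.exp(-true_rate * t) for t in times]  # noise-free demo data

def sse(rate):
    # Sum of squared errors between model prediction and data.
    return sum((math.exp(-rate * t) - y) ** 2 for t, y in zip(times, observed))

rate = 0.1   # initial guess, deliberately far from the truth
step = 0.05  # learning rate
for _ in range(2000):
    # Analytic gradient of the SSE with respect to the decay rate.
    grad = sum(2 * (math.exp(-rate * t) - y) * (-t) * math.exp(-rate * t)
               for t, y in zip(times, observed))
    rate -= step * grad

print(round(rate, 3))
```

    As the abstract notes, such local descent can stall in local minima for harder models; adding stochastic perturbations or using sampling approaches addresses that at extra computational cost.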

  18. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first, associating data with objects in a statistically reasonable fashion and then, tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.

  19. Semiparametric time-to-event modeling in the presence of a latent progression event.

    PubMed

    Rice, John D; Tsodikov, Alex

    2017-06-01

    In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. © 2016, The International Biometric Society.

  20. Retrospective analysis: Conservative treatment of placenta increta with methotrexate.

    PubMed

    Zhang, Chunhua; Li, Hongyan; Zuo, Changting; Wang, Xietong

    2018-05-01

    To evaluate the efficacy of conservative treatment with methotrexate against placenta increta by two different routes of administration through retrospective analysis. A total of 54 women diagnosed with placenta increta after vaginal delivery were enrolled in this retrospective study. The participants accepted conservative management with methotrexate through either intravenous injection or local multi-point injection under ultrasound guidance. The treatment was considered effective if no hysterectomy was mandatory during the follow-up period. Out of the 54 cases, 21 patients were treated with methotrexate intravenously (group 1), and 33 patients received local multi-point injection to the placenta increta under ultrasound guidance (group 2). No maternal death occurred. In group 1, 10 patients expelled the placentas spontaneously, 7 patients underwent uterine curettage and 4 patients underwent hysterectomy for uncontrollable post-partum hemorrhage and infection. In group 2, 25 patients expelled placentas spontaneously and 8 patients underwent uterine curettage with no incidence of hysterectomy. The success rate in group 1 and group 2 was 17/21 and 33/33, respectively. The average time of the spontaneous placenta expulsion was 79.13 ± 29.87 days in group 1 and 42.42 ± 31.83 days in group 2. Local multi-point methotrexate injection under ultrasound guidance is a better alternative for patients with placenta increta, especially for preserving fertility. © 2018 Japan Society of Obstetrics and Gynecology.

  1. Modeling Leadership Styles in Human-Robot Team Dynamics

    NASA Technical Reports Server (NTRS)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  2. Planning applications in East Central Florida

    NASA Technical Reports Server (NTRS)

    Hannah, J. W. (Principal Investigator); Thomas, G. L.; Esparza, F.; Millard, J. J.

    1974-01-01

    The author has identified the following significant results. This is a study of applications of ERTS data to planning problems, especially as applicable to East Central Florida. The primary method has been computer analysis of digital data, with visual analysis of images serving to supplement the digital analysis. The principal method of analysis was supervised maximum likelihood classification, supplemented by density slicing and mapping of ratios of band intensities. Land-use maps have been prepared for several urban and non-urban sectors. Thematic maps have been found to be a useful form of the land-use maps. Change-monitoring has been found to be an appropriate and useful application. Mapping of marsh regions has been found effective and useful in this region. Local planners have participated in selecting training samples and in the checking and interpretation of results.
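    Supervised maximum likelihood classification of the kind described above can be sketched with per-class Gaussian models fit to labeled training pixels: each pixel is assigned to the class under which its band values are most likely. The two-band training samples below are invented, and a diagonal covariance is assumed for brevity (full-covariance Gaussians are typical in practice).

```python
import math

# Hypothetical training pixels: two spectral bands per pixel, two land-cover
# classes chosen by an analyst from training regions.
training = {
    "water": [[12, 30], [14, 28], [13, 31], [11, 29]],
    "marsh": [[40, 55], [42, 57], [39, 54], [41, 56]],
}

def fit(samples):
    # Per-band mean and variance for one class.
    n, bands = len(samples), len(samples[0])
    mean = [sum(s[b] for s in samples) / n for b in range(bands)]
    var = [sum((s[b] - mean[b]) ** 2 for s in samples) / n for b in range(bands)]
    return mean, var

models = {cls: fit(s) for cls, s in training.items()}

def log_likelihood(pixel, mean, var):
    # Independent (diagonal-covariance) Gaussian log-likelihood across bands.
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
               for x, m, v in zip(pixel, mean, var))

def classify(pixel):
    # Maximum likelihood rule: pick the class maximizing the log-likelihood.
    return max(models, key=lambda c: log_likelihood(pixel, *models[c]))

print(classify([13, 30]), classify([41, 55]))
```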

  3. Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System

    NASA Astrophysics Data System (ADS)

    Liu, Maw-Yang

    2017-12-01

    In this paper, the bi-orthogonal symbol mapping and detection scheme is investigated in a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code is exploited as the signature sequence, whose out-of-phase autocorrelation is zero. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed scheme using bi-orthogonal symbol mapping and detection can be developed. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, where the orthogonal matrix code and its complement are utilized. In the receiver, the received bi-orthogonal data symbol is fed into the maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme can greatly enlarge the Euclidean distance; hence, the system performance can be drastically improved.
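    Bi-orthogonal mapping and maximum likelihood detection can be sketched with a 4x4 Hadamard matrix and its complement as the eight-symbol set; for equal-energy symbols, ML detection reduces to picking the codeword with maximum correlation. The bit-to-symbol mapping below is one simple convention, not necessarily the paper's, and additive noise stands in for the optical channel.

```python
# 4x4 Hadamard matrix: rows are mutually orthogonal.
H = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]
# Bi-orthogonal set: the Hadamard rows plus their complements (8 symbols),
# so 3 data bits select one codeword.
codebook = H + [[-x for x in row] for row in H]

def encode(bits3):
    index = bits3[0] * 4 + bits3[1] * 2 + bits3[2]
    return codebook[index]

def detect(received):
    # ML detection for equal-energy symbols: maximum correlation codeword.
    corr = [sum(r * c for r, c in zip(received, code)) for code in codebook]
    best = max(range(len(codebook)), key=lambda i: corr[i])
    return [(best >> 2) & 1, (best >> 1) & 1, best & 1]

sent = [1, 0, 1]
noisy = [s + n for s, n in zip(encode(sent), [0.3, -0.2, 0.1, -0.4])]
print(detect(noisy))
```

    Including the complements doubles the symbol set for the same chip length at a modest distance penalty, which is the usual appeal of bi-orthogonal over purely orthogonal signaling.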

  4. Compound Poisson Law for Hitting Times to Periodic Orbits in Two-Dimensional Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Carney, Meagan; Nicol, Matthew; Zhang, Hong-Kun

    2017-11-01

    We show that a compound Poisson distribution holds for scaled exceedances of observables φ uniquely maximized at a periodic point ζ in a variety of two-dimensional hyperbolic dynamical systems with singularities (M, T, μ), including the billiard maps of Sinai dispersing billiards in both the finite and infinite horizon case. The observable we consider is of the form φ(z) = -ln d(z, ζ), where d is a metric defined in terms of the stable and unstable foliations. The compound Poisson process we obtain is a Pólya-Aeppli distribution of index θ. We calculate θ in terms of the derivative of the map T. Furthermore, if we define M_n = max{φ, φ∘T, …, φ∘T^n} and u_n(τ) by lim_{n→∞} nμ(φ > u_n(τ)) = τ, the maximal process satisfies an extreme value law of the form μ(M_n ≤ u_n) = e^{-θτ}. These results generalize to a broader class of functions maximized at ζ, though the formulas for the parameters in the distribution need to be modified.

  5. Radial k-t SPIRiT: autocalibrated parallel imaging for generalized phase-contrast MRI.

    PubMed

    Santelli, Claudio; Schaeffter, Tobias; Kozerke, Sebastian

    2014-11-01

    To extend SPIRiT to additionally exploit temporal correlations for highly accelerated generalized phase-contrast MRI and to compare the performance of the proposed radial k-t SPIRiT method relative to frame-by-frame SPIRiT and radial k-t GRAPPA reconstruction for velocity and turbulence mapping in the aortic arch. Free-breathing navigator-gated two-dimensional radial cine imaging with three-directional multi-point velocity encoding was implemented and fully sampled data were obtained in the aortic arch of healthy volunteers. Velocities were encoded with three different first gradient moments per axis to permit quantification of mean velocity and turbulent kinetic energy. Velocity and turbulent kinetic energy maps from up to 14-fold undersampled data were compared for k-t SPIRiT, frame-by-frame SPIRiT, and k-t GRAPPA relative to the fully sampled reference. Using k-t SPIRiT, improvements in magnitude and velocity reconstruction accuracy were found. Temporally resolved magnitude profiles revealed a reduction in spatial blurring with k-t SPIRiT compared with frame-by-frame SPIRiT and k-t GRAPPA for all velocity encodings, leading to improved estimates of turbulent kinetic energy. k-t SPIRiT offers improved reconstruction accuracy at high radial undersampling factors and hence facilitates the use of generalized phase-contrast MRI for routine use. Copyright © 2013 Wiley Periodicals, Inc.

  6. A novel locus for split-hand/foot malformation associated with tibial hemimelia (SHFLD syndrome) maps to chromosome region 17p13.1-17p13.3.

    PubMed

    Lezirovitz, Karina; Maestrelli, Sylvia Regina Pedrosa; Cotrim, Nelson Henderson; Otto, Paulo A; Pearson, Peter L; Mingroni-Netto, Regina Celia

    2008-07-01

    Split-hand/foot malformation (SHFM) associated with aplasia of long bones, known as SHFLD or tibial hemimelia-ectrodactyly syndrome, is a rare condition with autosomal dominant inheritance, reduced penetrance and an incidence estimated at about 1 in 1,000,000 liveborns. To date, three chromosomal regions have been reported as strong candidates for harboring SHFLD syndrome genes: 1q42.2-q43, 6q14.1 and 2q14.2. We characterized the phenotype of nine affected individuals from a large family with the aim of mapping the causative gene. Among the nine affected patients, four had only SHFM of the hands and no tibial defects, three had both defects and two had only unilateral tibial hemimelia. In keeping with previous publications on this and other families, there was clear evidence of both variable expression and incomplete penetrance, the latter bearing hallmarks of anticipation. Segregation analysis and multipoint lod score calculations (maximum lod score of 5.03 using the LINKMAP software) using all potentially informative family members, both affected and unaffected, identified the chromosomal region 17p13.1-17p13.3 as the best and only candidate for harboring a novel mutated gene responsible for the syndrome in this family. The candidate gene CRK located within this region was sequenced but no pathogenic mutation was detected.

  7. A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.

    PubMed

    Lo, C C; Chang, W H

    2000-01-01

    The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulation is used in MOHGA. Four subpopulations are generated according to the elitism reservation strategy, the shifting Prüfer vector, stochastic universal sampling, and the complete random method, respectively. Mixing these four subpopulations produces the next generation population. The MOHGA can effectively search the feasible solution space due to population diversity. The MOHGA has been applied to CMNDP. Computational and analytical results show that the MOHGA finds most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
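
    As a rough illustration of the selection procedure described above, the sketch below builds the next generation from four subpopulations: an elite reserve, stochastic universal sampling, perturbed copies of good individuals (a bitstring stand-in for the shifting Prüfer-vector operator of the paper), and purely random individuals. The encoding and perturbation are illustrative assumptions, not the authors' implementation.

```python
import random

def stochastic_universal_sampling(pop, fitness, k):
    """Pick k individuals with evenly spaced pointers over the fitness wheel."""
    total = sum(fitness)
    step = total / k
    start = random.uniform(0, step)
    chosen, cum, idx = [], 0.0, 0
    for i in range(k):
        p = start + i * step
        while cum + fitness[idx] < p:
            cum += fitness[idx]
            idx += 1
        chosen.append(pop[idx])
    return chosen

def next_generation(pop, fitness, n_elite, n_sus, n_shift, n_rand, genome_len):
    """Merge four subpopulations: elitism, SUS, perturbed elites, and random."""
    ranked = [g for _, g in sorted(zip(fitness, pop), key=lambda t: -t[0])]
    elite = ranked[:n_elite]                       # elitism reservation strategy
    sus = stochastic_universal_sampling(pop, fitness, n_sus)
    # perturbed copies of good individuals: a bitstring stand-in for the
    # "shifting Pruefer vector" operator described in the abstract
    shifted = [[(gene + random.randint(0, 1)) % 2 for gene in g]
               for g in ranked[:n_shift]]
    randoms = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(n_rand)]
    return elite + sus + shifted + randoms
```

    Mixing independently generated subpopulations in this way maintains diversity while still retaining the best solutions found so far.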

  8. AVST Morphing Project Research Summaries in Fiscal Year 2001

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.

    2002-01-01

    The Morphing project at the National Aeronautics and Space Administration's Langley Research Center is part of the Aerospace Vehicle Systems Technology (AVST) Program Office that conducts fundamental research on advanced technologies for future flight vehicles. The objectives of the Morphing project are to develop and assess advanced technologies and integrated component concepts to enable efficient, multi-point adaptability in air and space vehicles. In the context of the project, the word "morphing" is defined as "efficient, multi-point adaptability" and may include micro or macro, structural or fluidic approaches. The current document on the Morphing project is a compilation of research summaries and other information on the project from fiscal year 2001. The focus of this document is to provide a brief overview of the project content, technical results and lessons learned from fiscal year 2001.

  9. The evaluation of multi-structure, multi-atlas pelvic anatomy features in a prostate MR lymphography CAD system

    NASA Astrophysics Data System (ADS)

    Meijs, M.; Debats, O.; Huisman, H.

    2015-03-01

    In prostate cancer, the detection of metastatic lymph nodes indicates progression from localized disease to metastasized cancer. The detection of positive lymph nodes is, however, a complex and time consuming task for experienced radiologists. Assistance of a two-stage Computer-Aided Detection (CAD) system in MR Lymphography (MRL) is not yet feasible due to the large number of false positives in the first stage of the system. By introducing a multi-structure, multi-atlas segmentation, using an affine transformation followed by a B-spline transformation for registration, the organ location is given by a mean density probability map. The atlas segmentation is semi-automatically drawn with ITK-SNAP, using Active Contour Segmentation. Each anatomic structure is identified by a label number. Registration is performed using Elastix, using Mutual Information and an Adaptive Stochastic Gradient optimization. The dataset consists of the MRL scans of ten patients, with lymph nodes manually annotated in consensus by two expert readers. The feature map of the CAD system consists of the Multi-Atlas and various other features (e.g. Normalized Intensity and multi-scale Blobness). The voxel-based Gentleboost classifier is evaluated using ROC analysis with cross validation. We show in a set of 10 studies that adding multi-structure, multi-atlas anatomical structure likelihood features improves the quality of the lymph node voxel likelihood map. Multiple structure anatomy maps may thus make MRL CAD more feasible.

  10. Using NASA Satellite Observations to Map Wildfire Risk in the United States for Allocation of Fire Management Resources

    NASA Astrophysics Data System (ADS)

    Farahmand, A.; Reager, J. T., II; Behrangi, A.; Stavros, E. N.; Randerson, J. T.

    2017-12-01

    Fires are a key disturbance globally, acting as a catalyst for terrestrial ecosystem change and contributing significantly to both carbon emissions and changes in surface albedo. The socioeconomic impacts of wildfire activity are also significant, with wildfires resulting in billions of dollars of losses every year. Fire size, area burned and frequency are increasing; predicting fire danger, defined by the United States National Interagency Fire Center (NIFC) as the demand on fire management resources as a function of how far fuel flammability (a function of ignitability, consumability and availability) departs from normal, is therefore an important step toward reducing costs associated with wildfires. Numerous studies have aimed to predict the likelihood of fire danger, but few use remote sensing data to map fire danger at scales commensurate with regional management decisions (e.g., deployment of resources nationally throughout fire season with seasonal and monthly prediction). Here, we use NASA Gravity Recovery And Climate Experiment (GRACE) assimilated surface soil moisture, NASA Atmospheric Infrared Sounder (AIRS) vapor pressure deficit, NASA Moderate Resolution Imaging Spectroradiometer (MODIS) enhanced vegetation index and landcover products, along with US Forest Service historical fire activity data, to generate probabilistic monthly fire potential maps in the United States. These maps can be useful not only in governmental operational allocation of fire management resources, but also in improving understanding of the Earth system and how it is changing in order to refine predictions of fire extremes.
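
    A minimal sketch of turning such predictor layers into a probabilistic fire-potential value per grid cell with logistic regression. The features, coefficients and occurrence data below are synthetic stand-ins for the GRACE, AIRS, MODIS and Forest Service products, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 3000

# synthetic monthly grid-cell predictors (stand-ins for the satellite products)
soil_moisture = rng.uniform(0.0, 1.0, n)   # GRACE-assimilated surface wetness
vpd = rng.uniform(0.0, 5.0, n)             # AIRS vapor pressure deficit
evi = rng.uniform(0.0, 1.0, n)             # MODIS enhanced vegetation index

# synthetic historical fire occurrence: drier, higher-VPD cells burn more often
logit = -2.0 - 3.0 * soil_moisture + 1.2 * vpd + 1.0 * evi
fire = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([soil_moisture, vpd, evi])
model = LogisticRegression().fit(X, fire)
fire_potential = model.predict_proba(X)[:, 1]   # probabilistic fire potential per cell
```

    In practice the fitted per-cell probabilities would be mapped back onto the spatial grid to produce the monthly fire potential maps.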

  11. Spacecraft Charging and the Microwave Anisotropy Probe Spacecraft

    NASA Technical Reports Server (NTRS)

    VanSant, J. Timothy; Neergaard, Linda F.

    1998-01-01

    The Microwave Anisotropy Probe (MAP), a MIDEX mission built in partnership between Princeton University and the NASA Goddard Space Flight Center (GSFC), will study the cosmic microwave background. It will be inserted into a highly elliptical earth orbit for several weeks and then use a lunar gravity assist to orbit around the second Lagrangian point (L2), 1.5 million kilometers anti-sunward from the earth. The charging environment for the phasing loops and at L2 was evaluated. There is a limited set of data for L2; the GEOTAIL spacecraft measured relatively low spacecraft potentials (approx. 50 V maximum) near L2. The main area of concern for charging on the MAP spacecraft is the well-established threat posed by the "geosynchronous region" between 6-10 Re. The launch in the autumn of 2000 will coincide with the declining phase of the solar maximum, a period when the likelihood of a substorm is higher than usual. The likelihood of a substorm at that time has been roughly estimated to be on the order of 20% for a typical MAP mission profile. Because of the possibility of spacecraft charging, a requirement for conductive spacecraft surfaces was established early in the program. Subsequent NASCAP/GEO analyses for the MAP spacecraft demonstrated that a significant portion of the sunlit surface (solar cell cover glass and sunshade) could have nonconductive surfaces without significantly raising differential charging. The need for conductive materials on surfaces continually in eclipse has also been reinforced by NASCAP analyses.

  12. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    NASA Astrophysics Data System (ADS)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
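
    For intuition about such an estimator, a minimal sketch: when the observed map is modeled as data = mass × template + white noise, maximizing the Gaussian likelihood in the mass parameter gives a closed-form matched-filter estimate with a known error bar. The template shape and noise level below are illustrative stand-ins, not the CMB-cluster lensing model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 400
template = np.exp(-np.linspace(-5.0, 5.0, npix) ** 2)  # assumed lensing-signal shape
sigma = 0.5                                            # white-noise level per pixel
true_mass = 2.0
data = true_mass * template + rng.normal(0.0, sigma, npix)

# For d = M*t + n with white Gaussian noise, the log-likelihood
# -0.5 * sum((d - M*t)**2) / sigma**2 is quadratic in M, so the MLE
# is the closed-form matched filter below, with analytic uncertainty.
m_hat = (template @ data) / (template @ template)
m_err = sigma / np.sqrt(template @ template)
```

    In the realistic setting the noise is correlated and the template depends nonlinearly on the mass, so the likelihood is maximized numerically rather than in closed form.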

  13. Delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) and T2 mapping of talar osteochondral lesions: Indicators of clinical outcomes.

    PubMed

    Rehnitz, Christoph; Kuni, Benita; Wuennemann, Felix; Chloridis, Dimitrios; Kirwadi, Anand; Burkholder, Iris; Kauczor, Hans-Ulrich; Weber, Marc-André

    2017-12-01

    To evaluate the utility of delayed gadolinium-enhanced magnetic resonance imaging of cartilage (dGEMRIC) and T2 mapping in the evaluation of type II osteochondral lesions (OCLs) of the talus and to define cutoff values for identifying patients with good/poor clinical outcomes. 28 patients (mean age, 42.3 years) underwent T2 mapping and dGEMRIC at least 1.5 years (mean duration, 3.5 years) after microfracture (n = 12) or conservative (n = 16) treatment for type II OCL. Clinical outcomes were considered good with an American Orthopedic Foot and Ankle Society score ≥80. The T1/T2 values and indices of repair tissue (RT; cartilage above the OCL) were compared to those of the adjacent normal cartilage (NC) by region-of-interest analysis. The ability of the two methods to discriminate RT from NC was determined by area under the receiver operating characteristics curve (AUC) analysis. The Youden index was maximized for T1/T2 measures to identify cutoff values indicative of good/poor clinical outcomes. Repair tissue exhibited lower dGEMRIC values (629.83 vs. 738.51 msec) and higher T2 values (62.07 vs. 40.69 msec) than NC (P < 0.001). T2 mapping exhibited a greater AUC than dGEMRIC (0.88 vs. 0.69; P = 0.0398). All T1 measures exhibited higher maximized Youden indices than the corresponding T2 measures. The highest maximized Youden index for the T1 difference was observed at a cutoff value of 84 msec (sensitivity, 78%; specificity, 83%). While T2 mapping is superior to dGEMRIC in discriminating RT, the latter better identifies good/poor clinical outcomes in patients with type II talar OCL. Level of Evidence: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;46:1601-1610. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Whole genome sequences are required to fully resolve the linkage disequilibrium structure of human populations.

    PubMed

    Pengelly, Reuben J; Tapper, William; Gibson, Jane; Knut, Marcin; Tearle, Rick; Collins, Andrew; Ennis, Sarah

    2015-09-03

    An understanding of linkage disequilibrium (LD) structures in the human genome underpins much of medical genetics and provides a basis for disease gene mapping and investigating biological mechanisms such as recombination and selection. Whole genome sequencing (WGS) provides the opportunity to determine LD structures at maximal resolution. We compare LD maps constructed from WGS data with LD maps produced from the array-based HapMap dataset, for representative European and African populations. WGS provides up to 5.7-fold greater SNP density than array-based data and achieves much greater resolution of LD structure, allowing for identification of up to 2.8-fold more regions of intense recombination. The absence of ascertainment bias in variant genotyping improves the population representativeness of the WGS maps, and highlights the extent of uncaptured variation using array genotyping methodologies. The complete capture of LD patterns using WGS allows for higher genome-wide association study (GWAS) power compared to array-based GWAS, with WGS also allowing for the analysis of rare variation. The impact of marker ascertainment issues in arrays has been greatest for Sub-Saharan African populations where larger sample sizes and substantially higher marker densities are required to fully resolve the LD structure. WGS provides the best possible resource for LD mapping due to the maximal marker density and lack of ascertainment bias. WGS LD maps provide a rich resource for medical and population genetics studies. The increasing availability of WGS data for large populations will allow for improved research utilising LD, such as GWAS and recombination biology studies.
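
    The pairwise LD that such maps summarize is commonly quantified as the squared correlation r² between markers. A minimal sketch, using genotype dosages as a common approximation to haplotype-based r²:

```python
import numpy as np

def ld_r2_matrix(genotypes):
    """Pairwise r^2 between SNP columns of a (samples x SNPs) dosage matrix (0/1/2)."""
    g = np.asarray(genotypes, dtype=float)
    g = g - g.mean(axis=0)                 # centre each SNP column
    cov = g.T @ g / (len(g) - 1)
    sd = np.sqrt(np.diag(cov))
    r = cov / np.outer(sd, sd)             # correlation matrix
    return r ** 2
```

    Applied to a WGS genotype matrix, off-diagonal blocks of high r² correspond to the LD blocks the maps describe, and the breakpoints between them to regions of intense recombination.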

  15. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value, and its partial derivatives, with full accuracy for any location in the parameter space and for any inclination-only data set. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. For very steep and dispersed data sets, the information needed to separate the mean inclination from the precision parameter will be lost. It is worth noting that the likelihood function always has a maximum value; however, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion.
    Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The comparison is very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and whose mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
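
    The exponential-overflow problem described above is the classic motivation for factoring the largest exponent out of a log-likelihood before summation. A generic sketch of the log-sum-exp device (not the marginal Fisher likelihood itself):

```python
import numpy as np

def log_sum_exp(log_terms):
    """Stable log(sum(exp(x_i))): factor out the largest exponent first."""
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))

kappa = 1000.0                                   # a large precision parameter
log_terms = np.array([kappa, kappa - 1.0, kappa - 2.0])

with np.errstate(over='ignore'):
    naive = np.log(np.sum(np.exp(log_terms)))    # exp(1000) overflows to inf

stable = log_sum_exp(log_terms)                  # finite and fully accurate
```

    After the rescaling, every exponent is at most zero, so the sum can be evaluated for arbitrarily large precision parameters.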

  16. 47 CFR 101.1005 - Frequencies available.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1005 Frequencies available. (a) The... is shared with private microwave point-to-point systems licensed prior to March 11, 1997, as provided...

  17. 47 CFR 101.1005 - Frequencies available.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1005 Frequencies available. (a) The... is shared with private microwave point-to-point systems licensed prior to March 11, 1997, as provided...

  18. 47 CFR 101.1005 - Frequencies available.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1005 Frequencies available. (a) The... is shared with private microwave point-to-point systems licensed prior to March 11, 1997, as provided...

  19. 47 CFR 101.1005 - Frequencies available.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1005 Frequencies available. (a) The... is shared with private microwave point-to-point systems licensed prior to March 11, 1997, as provided...

  20. 47 CFR 101.1005 - Frequencies available.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1005 Frequencies available. (a) The... is shared with private microwave point-to-point systems licensed prior to March 11, 1997, as provided...

  1. A single bout of exhaustive exercise affects integrated baroreflex function after 16 days of head-down tilt

    NASA Technical Reports Server (NTRS)

    Engelke, K. A.; Doerr, D. F.; Convertino, V. A.

    1995-01-01

    We tested the hypothesis that one bout of maximal exercise performed 24 h before reambulation from 16 days of 6 degrees head-down tilt (HDT) could increase integrated baroreflex sensitivity. Isolated carotid-cardiac and integrated baroreflex function was assessed in seven subjects before and after two periods of HDT separated by 11 mo. On the last day of one HDT period, subjects performed a single bout of maximal cycle ergometry (exercise). Subjects did not exercise after the other HDT period (control). Carotid-cardiac baroreflex sensitivity was evaluated using a neck collar device. Integrated baroreflex function was assessed by recording heart rate (HR) and mean arterial pressure (MAP) during a 15-s Valsalva maneuver (VM) at a controlled expiratory pressure of 30 mmHg. The ratio of the change in HR to the change in MAP (ΔHR/ΔMAP) during phases II and IV of the VM was used as an index of cardiac baroreflex sensitivity. Baroreflex-mediated vasoconstriction was assessed by measuring the late phase II rise in MAP. Following HDT, carotid-cardiac baroreflex sensitivity was reduced (2.8 to 2.0 ms/mmHg; P = 0.05), as was ΔHR/ΔMAP during phase II (-1.5 to -0.8 beats/mmHg; P = 0.002). After exercise, isolated carotid baroreflex activity and phase II ΔHR/ΔMAP returned to pre-HDT levels but remained attenuated in the control condition. Phase IV ΔHR/ΔMAP was not altered by HDT or exercise. The late phase II increase of MAP was 71% greater after exercise compared with control (7 vs. 2 mmHg; P = 0.041).

  2. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for parameter inference within a single model. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as its simulated values assign relatively high probability density to the observed data. The evidence thus naturally acts as an Occam's razor, preferring simpler and more constrained models, whereas information criteria incorporate only the likelihood maximum and can select over-fitted models. Since the evidence is not particularly easy to estimate in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009).
    Illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
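
    A toy illustration of what the evidence is: for a conjugate normal model it has a closed form and can also be estimated by brute-force averaging of the likelihood over prior draws. GMIS replaces this naive prior average with bridge sampling over a Gaussian mixture fitted to posterior samples; the model below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
y = 0.7                                    # a single observation

def likelihood(mu):
    # Gaussian likelihood N(y | mu, 1)
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2.0 * np.pi)

# evidence = integral of likelihood * prior = E_prior[likelihood],
# estimated here by brute-force averaging over prior draws mu ~ N(0, 1)
mu_draws = rng.normal(0.0, 1.0, 200_000)
evidence_mc = likelihood(mu_draws).mean()

# the conjugate toy model has a closed form: marginally y ~ N(0, 1 + 1)
evidence_exact = np.exp(-0.25 * y ** 2) / np.sqrt(4.0 * np.pi)
```

    The naive prior average becomes hopelessly inefficient in higher dimensions, where the prior rarely lands in the region of high likelihood; this is exactly the inefficiency that importance and bridge sampling estimators such as GMIS address.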

  3. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    PubMed Central

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
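
    The mechanics of such a nested likelihood-ratio test can be sketched on a simpler pair of models: here an exponential null is tested against a gamma alternative that nests it (an illustrative stand-in for the semi-nonparametric nest of the MNL/MDCEV models).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=500)   # truth violates the exponential null

# restricted model: exponential (a gamma with shape fixed at 1); MLE scale = mean
ll0 = stats.expon(scale=x.mean()).logpdf(x).sum()

# flexible model nesting the null: gamma with free shape parameter
shape1, _, scale1 = stats.gamma.fit(x, floc=0)
ll1 = stats.gamma(shape1, loc=0, scale=scale1).logpdf(x).sum()

# likelihood-ratio statistic; one restricted parameter -> chi-square with 1 df
lr = 2.0 * (ll1 - ll0)
p_value = stats.chi2.sf(lr, df=1)
```

    A small p-value rejects the restricted distributional assumption, exactly as the paper's test rejects the standard Gumbel assumption when the semi-nonparametric nest fits significantly better.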

  5. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  6. Defoliation potential of gypsy moth

    Treesearch

    David A. Gansner; David A. Drake; Stanford L. Arner; Rachel R. Hershey; Susan L. King; Susan L. King

    1993-01-01

    A model that uses forest stand characteristics to estimate the likelihood of gypsy moth (Lymantria dispar L.) defoliation has been developed. It was applied to recent forest inventory plot data to produce susceptibility ratings and maps showing current defoliation potential in a seven-state area where gypsy moth is an immediate threat.

  7. A strategy for maximizing native plant material diversity for ecological restoration, germplasm conservation and genecology research

    Treesearch

    Berta Youtie; Nancy Shaw; Matt Fisk; Scott Jensen

    2012-01-01

    One of the most important steps in planning a restoration project is careful selection of ecologically adapted native plant material. As species-specific seed zone maps are not available for most species in the Artemisia tridentata ssp. wyomingensis (Wyoming big sagebrush) ecoregion in the Great Basin, USA, we are employing a provisional seed zone map based on annual...

  8. Capability 9.2 Mobility

    NASA Technical Reports Server (NTRS)

    Zakrasjek, June

    2005-01-01

    Modern operational concepts require significant bandwidths and multipoint communication capabilities. Provide voice, video and data communications among vehicles moving along the surface, vehicles in suborbital transport or reconnaissance, surface elements, and home planet facilities.

  9. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al, and local cross ratios of Oakes. We know that bivariate distributions with same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. the first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. 
We study the performance of these two methods using actual and computer-generated data.
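
The two-stage idea can be sketched numerically. Below is a minimal illustration assuming a Clayton copula with exponential margins and no censoring; the margins are estimated by their empirical distribution functions (the role the Kaplan-Meier estimate plays under censoring), and the dependence parameter is then estimated by maximizing the copula likelihood with the margins held fixed. The copula family, sample size, and data-generating values are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulate bivariate data from a Clayton copula (conditional-inversion method).
theta_true = 2.0
n = 2000
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** -theta_true * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)
x = -np.log(1 - u)          # exponential(1) failure times via inverse CDF
y = -np.log(1 - v)

# Stage 1: estimate the margins nonparametrically (empirical CDF here;
# the Kaplan-Meier estimate would play this role under censoring).
def pseudo_obs(t):
    ranks = np.argsort(np.argsort(t)) + 1
    return ranks / (len(t) + 1)

uh, vh = pseudo_obs(x), pseudo_obs(y)

# Stage 2: maximize the Clayton copula log-likelihood over theta,
# holding the estimated margins fixed.
def neg_loglik(theta):
    s = uh ** -theta + vh ** -theta - 1
    ll = (np.log1p(theta)
          - (theta + 1) * (np.log(uh) + np.log(vh))
          - (2 + 1 / theta) * np.log(s))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(0.05, 20), method="bounded")
theta_hat = res.x
```

Because the dependence parameter is one-dimensional here, a bounded scalar optimizer suffices; the full maximum likelihood procedure of the abstract would instead maximize jointly over marginal and dependence parameters.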

  10. Multipoint NIR spectroscopy for gross composition analysis of powdered infant formula under various motion conditions.

    PubMed

    Cama-Moncunill, Raquel; Markiewicz-Keszycka, Maria; Dixit, Yash; Cama-Moncunill, Xavier; Casado-Gavalda, Maria P; Cullen, Patrick J; Sullivan, Carl

    2016-07-01

    Powdered infant formula (PIF) is a worldwide, industrially produced human milk substitute. Manufacture of PIF is subject to strict quality controls to ensure that the product meets all compositional requirements. Near-infrared (NIR) spectroscopy is a rapid, non-destructive and well-established technique for food quality assessment. Fibre-optic NIR sensors allow in-line, real-time measurement and can record spectra from different stages of the process; their non-contact character can be enhanced by fitting collimators, which allow operation at various distances. In the present study, a novel multipoint NIR spectroscopy system equipped with four fibre-optic probes with collimators was assessed to determine the carbohydrate and protein contents of PIF samples under static and motion conditions (0.02, 0.15 and 0.30 m/s) simulating possible industrial scenarios. The system, based on a Fabry-Perot interferometer, records four spectra concurrently, rather than consecutively as in "quasi-simultaneous" multipoint NIR systems. The best results were obtained under static conditions, giving a calibration R(2) of 0.95 and an RMSEP of 1.89%. Yet the in-motion predictions also yielded reasonably low RMSEP values, for instance 2.70% at 0.15 m/s, demonstrating the system's potential for in/on-line applications at various speeds. The work also evaluated the viability of using general off-line calibrations developed under static conditions for on/in-line applications subject to motion. To this end, calibrations in both modes were developed and compared. The best results were obtained with specific calibrations; however, reasonably accurate models were also obtained with the general calibration. Furthermore, this work illustrated the independence of the collimator-probe setups by characterizing simultaneously recorded PIF samples according to their carbohydrate content, even when measured under different conditions. Therefore, the improved multipoint NIR approach constitutes a potential in/on-line tool for quality evaluation of PIF throughout the manufacturing process. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vile, D; Zhang, L; Cuttino, L

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, potential failure modes were determined, along with any current controls in place. A full failure mode and effects analysis (FMEA) was then performed by grading each failure mode’s likelihood of occurrence, likelihood of detection, and potential severity. These numbers were multiplied to compute the risk priority number (RPN) for each potential failure mode, and failure modes were ranked by RPN. Additional controls were then added, with the failure modes carrying the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure as currently implemented. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with the failure modes associated with the highest risk addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
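
The RPN arithmetic described above is straightforward to reproduce. In the sketch below, each failure mode is graded on occurrence, severity, and detectability, and modes are ranked by the product; the failure modes and scores are hypothetical, since the abstract does not publish its 72-mode worksheet.

```python
# Each failure mode is graded (commonly 1-10) for occurrence (O), severity (S),
# and detectability (D); RPN = O * S * D, and modes are addressed highest-RPN first.
failure_modes = [
    # (description, occurrence, severity, detectability) -- illustrative values
    ("Wrong activity drawn into vial", 3, 9, 4),
    ("Catheter position not verified", 2, 10, 3),
    ("Dose calculation error", 2, 9, 2),
    ("Paperwork transcription error", 5, 4, 3),
]

ranked = sorted(
    ((desc, o * s * d) for desc, o, s, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

New safety barriers would then be assigned starting from the top of this list, mirroring the prioritization described in the Results.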

  12. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  13. Relative impact of previous disturbance history on the likelihood of additional disturbance in the Northern United States Forest Service USFS Region

    NASA Astrophysics Data System (ADS)

    Hernandez, A. J.

    2015-12-01

    The Landsat archive is increasingly being used to detect trends in the occurrence of forest disturbance. Beyond information about the amount of area affected, forest managers need to know if and how disturbance regimes change. The National Forest System (NFS) has developed a comprehensive plan for carbon monitoring that requires detailed temporal mapping of forest disturbances across 75 million hectares. A long-term annual time series showing the timing, extent, and type of disturbance from 1990 through 2011 has been prepared for several USFS Regions, including the Northern Region. Our mapping starts with automated detection of annual disturbances using a time series of historical Landsat imagery. Automated detections are meticulously inspected, corrected, and labeled using various USFS ancillary datasets. The resulting maps of verified disturbance show the timing of each event; the disturbance types are fires, harvests, insect activity, disease, and abiotic (wind, drought, avalanche) damage. The magnitude of each change event is also modeled as the proportion of canopy cover lost. The sequence of disturbances for every pixel since 1990 has been consistently mapped and is available across the entirety of NFS. Our datasets contain sufficient information to describe the frequency of stand replacement, as well as how often disturbance results in only a partial loss of canopy. This information provides empirical insight into how an initial disturbance may predispose a stand to further disturbance, and it also shows a climatic signal in the occurrence of processes such as fire and insect epidemics. Thus, we have the information to model the likelihood of certain disturbances after a given event (e.g., how a past fire changes the likelihood of future insect activity). Here, we explore whether previous disturbance history is a reliable predictor of additional disturbance and present results of applying logistic regression to obtain predicted probabilities of occurrence of additional disturbance types. We describe responses in additional disturbance and prominent trends for each major forest type.
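
The logistic-regression step can be sketched on synthetic pixel histories, where a past fire is assumed (for illustration only) to raise the probability of later insect disturbance; the predictors, coefficients, and data below are invented, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic pixel histories: binary indicators for past fire and past harvest,
# and an outcome "insect disturbance occurred later". The positive fire
# coefficient (1.5) encodes the assumed predisposing effect.
n = 5000
fire = rng.integers(0, 2, size=n)
harvest = rng.integers(0, 2, size=n)
logit = -2.0 + 1.5 * fire + 0.3 * harvest
p = 1 / (1 + np.exp(-logit))
insect = rng.binomial(1, p)

X = np.column_stack([fire, harvest])
model = LogisticRegression().fit(X, insect)

# Predicted probability of later insect disturbance with and without a past fire.
p_fire = model.predict_proba([[1, 0]])[0, 1]
p_nofire = model.predict_proba([[0, 0]])[0, 1]
```

The fitted model returns per-pixel predicted probabilities, which is the form of output the abstract describes for mapping the likelihood of additional disturbance.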

  14. Measuring and partitioning the high-order linkage disequilibrium by multiple order Markov chains.

    PubMed

    Kim, Yunjung; Feng, Sheng; Zeng, Zhao-Bang

    2008-05-01

    A map of the background levels of disequilibrium between nearby markers can be useful for association mapping studies. In order to assess the background levels of linkage disequilibrium (LD), multilocus LD measures are more advantageous than pairwise LD measures, because the combined analysis of pairwise LD measures is not adequate to detect simultaneous allele associations among multiple markers. Various multilocus LD measures based on haplotypes have been proposed. However, most of these measures provide a single index of association among multiple markers and do not reveal the complex patterns and different levels of LD structure. In this paper, we employ non-homogeneous, multiple-order Markov chain models as a statistical framework to measure and partition the LD among multiple markers into components due to different orders of marker associations. Using a sliding window of multiple markers on phased haplotype data, we compute the corresponding likelihoods for different Markov chain (MC) orders in each window. The log-likelihood difference between the lowest-order model (MC0) and the highest-order model in each window is used as a measure of the total LD, or the overall deviation from gametic equilibrium, for the window. We then partition the total LD into lower-order disequilibria and estimate the effects from two-, three-, and higher-order disequilibria. The relationship between different orders of LD and the log-likelihood difference involving two different orders of MC models is explored. By applying our method to the phased haplotype data in the ENCODE regions of the HapMap project, we are able to identify high/low multilocus LD regions. Our results reveal that most of the LD in the HapMap data is attributed to the LD between adjacent pairs of markers across the whole region. LD between adjacent pairs of markers appears to be more significant in high multilocus LD regions than in low ones. We also find that as the multilocus total LD increases, the effects of high-order LD tend to weaken owing to the lack of observed multilocus haplotypes. The overall estimates of first-, second-, third-, and fourth-order LD across the ENCODE regions are 64%, 23%, 9%, and 3%, respectively.
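
The window-level total-LD statistic, a log-likelihood difference between the lowest-order and a higher-order non-homogeneous Markov chain, can be sketched directly. The toy haplotypes below are invented, and the comparison stops at order 1, whereas the paper partitions the difference across higher orders as well.

```python
import math
from collections import Counter

# Phased haplotypes over a 4-marker window (0/1 alleles) -- toy data in which
# adjacent markers are strongly associated.
haplotypes = ["0000", "0001", "1111", "1110", "0000", "1111",
              "0011", "1100", "0000", "1111", "0000", "1111"]
n = len(haplotypes)
L = len(haplotypes[0])

def loglik_mc0(haps):
    """Non-homogeneous order-0 model: independent per-site allele frequencies."""
    ll = 0.0
    for j in range(L):
        counts = Counter(h[j] for h in haps)
        for h in haps:
            ll += math.log(counts[h[j]] / n)
    return ll

def loglik_mc1(haps):
    """Non-homogeneous order-1 Markov chain: per-position transition frequencies."""
    counts0 = Counter(h[0] for h in haps)
    ll = sum(math.log(counts0[h[0]] / n) for h in haps)
    for j in range(1, L):
        pair = Counter((h[j - 1], h[j]) for h in haps)
        prev = Counter(h[j - 1] for h in haps)
        for h in haps:
            ll += math.log(pair[(h[j - 1], h[j])] / prev[h[j - 1]])
    return ll

ll0, ll1 = loglik_mc0(haplotypes), loglik_mc1(haplotypes)
total_ld = ll1 - ll0   # window-level measure of overall deviation from equilibrium
```

Because the order-1 model nests the order-0 model and both are fitted by maximum likelihood, the difference is always non-negative, and it grows with the strength of association between adjacent markers in the window.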

  15. Dual-Particle Imaging System with Neutron Spectroscopy for Safeguard Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamel, Michael C.; Weber, Thomas M.

    2017-11-01

    A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded special nuclear material (SNM). The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is crucial for maximizing the performance of the system; however, a key deficiency exists in the widely used iterative list-mode maximum-likelihood expectation-maximization (MLEM) image reconstruction technique: a stopping condition is required to obtain a good-quality solution, but such conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem, as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution. The application of SOE to the DPI is presented in this work.
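
For reference, the MLEM iteration whose stopping-rule weakness motivates SOE has a compact multiplicative form: each image estimate is scaled by the back-projected ratio of measured to predicted counts. A noiseless toy system matrix (not the DPI's response model) is assumed below.

```python
import numpy as np

# Toy MLEM: system matrix A maps a 2-pixel image to 3 detector bins;
# y are noiseless counts, so the iteration should recover lam_true.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
lam_true = np.array([2.0, 1.0])
y = A @ lam_true

sens = A.sum(axis=0)            # per-pixel sensitivity
lam = np.ones(2)                # flat starting image
for _ in range(500):
    proj = A @ lam              # forward projection
    lam = lam / sens * (A.T @ (y / proj))
```

With noisy data the same iteration eventually amplifies noise, which is why a stopping condition is needed in practice; SOE sidesteps that by sampling origins with MCMC instead of ascending the likelihood deterministically.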

  16. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
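
The clustering step can be illustrated with plain k-means, which is the core iterative assignment-update loop that ISODATA extends with split and merge rules. The two-class reflectance data below are synthetic stand-ins, not AVIRIS bands, and the class labels are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic 3-band reflectance pixels from two well-separated cover types
# (nominally "conifer forest" and "bare soil" -- assumed values).
forest = rng.normal([0.05, 0.30, 0.20], 0.02, size=(200, 3))
soil = rng.normal([0.25, 0.35, 0.45], 0.02, size=(200, 3))
pixels = np.vstack([forest, soil])

# Unsupervised clustering; an analyst would then label each cluster as a
# land cover class, as in the ISODATA-based workflow the abstract describes.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = km.labels_
```

In an operational workflow the cluster spectral signatures would then seed a maximum likelihood classifier over the full scene, matching the two-step procedure named in the abstract.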

  17. 47 CFR 101.139 - Authorization of transmitters.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.139 Authorization of transmitters. (a) Unless...-point microwave and point-to-multipoint services under this part must be a type that has been verified...

  18. 47 CFR 101.139 - Authorization of transmitters.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.139 Authorization of transmitters. (a) Unless...-point microwave and point-to-multipoint services under this part must be a type that has been verified...

  19. 47 CFR 101.139 - Authorization of transmitters.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.139 Authorization of transmitters. (a) Unless...-point microwave and point-to-multipoint services under this part must be a type that has been verified...

  20. 47 CFR 101.139 - Authorization of transmitters.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.139 Authorization of transmitters. (a) Unless...-point microwave and point-to-multipoint services under this part must be a type that has been verified...

  1. New Options for AV in Telecommunications

    ERIC Educational Resources Information Center

    Gelman, Morrie

    1974-01-01

    A discussion of MDS, a multi-point distribution service which is a recently evolved supplementary telecommunications service that suggests new possibilities for improving the reach, impact and effectiveness of audio-visual communications. (Author)

  2. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage of 2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and in which the metapleural gland, the ancestrally defining ant character, has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  3. Quantification of type I error probabilities for heterogeneity LOD scores.

    PubMed

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

    Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction θ and the admixture parameter α, and we compared this with the P values when one maximizes only with respect to θ (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, of sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over θ and α; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²(1) + (1/2)χ²(2). Thus, maximizing the HLOD over θ and α appears to add considerably less than an additional degree of freedom to the associated χ²(1) distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
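
The HLOD maximization over θ and α can be sketched for the simplest phase-known case, where each family's likelihood depends only on its recombinant count. The simulation settings below (family counts, meioses per family, true θ values) are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate phase-known recombinant counts: 30 linked families (theta = 0.1)
# and 10 unlinked ones (theta = 0.5), 8 informative meioses each.
n_meioses = 8
rec = np.concatenate([rng.binomial(n_meioses, 0.1, size=30),
                      rng.binomial(n_meioses, 0.5, size=10)])

def family_lr(r, theta):
    """Likelihood ratio L(theta)/L(1/2) for r recombinants in n meioses."""
    return (theta ** r * (1 - theta) ** (n_meioses - r)) / 0.5 ** n_meioses

# Admixture HLOD: maximize sum_i log10(alpha*LR_i(theta) + 1 - alpha)
# over a grid of theta (recombination) and alpha (linked proportion).
thetas = np.linspace(0.01, 0.5, 50)
alphas = np.linspace(0.0, 1.0, 51)
best = (-np.inf, None, None)
for th in thetas:
    lr = family_lr(rec, th)
    for a in alphas:
        hlod = np.log10(a * lr + 1 - a).sum()
        if hlod > best[0]:
            best = (hlod, th, a)

hlod_max, theta_hat, alpha_hat = best
```

The extra maximization over α is exactly what inflates the null distribution relative to the standard LOD, which is the type I error question the abstract quantifies by simulation.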

  4. Indoor Positioning System Using Magnetic Field Map Navigation and an Encoder System

    PubMed Central

    Kim, Han-Sol; Seo, Woojin; Baek, Kwang-Ryul

    2017-01-01

    In the indoor environment, variation of the magnetic field is caused by building structures, and magnetic field map navigation is based on this feature. In order to estimate position using this navigation, a three-axis magnetic field must be measured at every point to build a magnetic field map. After the magnetic field map is obtained, the position of the mobile robot can be estimated with a likelihood function whereby the measured magnetic field data and the magnetic field map are used. However, if only magnetic field map navigation is used, the estimated position can have large errors. In order to improve performance, we propose a particle filter system that integrates magnetic field map navigation and an encoder system. In this paper, multiple magnetic sensors and three magnetic field maps (a horizontal intensity map, a vertical intensity map, and a direction information map) are used to update the weights of particles. As a result, the proposed system estimates the position and orientation of a mobile robot more accurately than previous systems. This paper also shows that system performance improves as the number of magnetic sensors increases. Finally, experimental results are presented from the proposed system, which was implemented and evaluated. PMID:28327513

  5. Indoor Positioning System Using Magnetic Field Map Navigation and an Encoder System.

    PubMed

    Kim, Han-Sol; Seo, Woojin; Baek, Kwang-Ryul

    2017-03-22

    In the indoor environment, variation of the magnetic field is caused by building structures, and magnetic field map navigation is based on this feature. In order to estimate position using this navigation, a three-axis magnetic field must be measured at every point to build a magnetic field map. After the magnetic field map is obtained, the position of the mobile robot can be estimated with a likelihood function whereby the measured magnetic field data and the magnetic field map are used. However, if only magnetic field map navigation is used, the estimated position can have large errors. In order to improve performance, we propose a particle filter system that integrates magnetic field map navigation and an encoder system. In this paper, multiple magnetic sensors and three magnetic field maps (a horizontal intensity map, a vertical intensity map, and a direction information map) are used to update the weights of particles. As a result, the proposed system estimates the position and orientation of a mobile robot more accurately than previous systems. This paper also shows that system performance improves as the number of magnetic sensors increases. Finally, experimental results are presented from the proposed system, which was implemented and evaluated.
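
The proposed fusion of encoder odometry and magnetic-map likelihoods is, at its core, a standard particle filter: predict with the encoder, weight by the map likelihood, resample. A one-dimensional sketch with an invented field map follows; the real system uses three map layers and multiple sensors.

```python
import numpy as np

rng = np.random.default_rng(5)

# One-dimensional magnetic "map": field intensity as a function of position.
def field(x):
    return np.sin(2.0 * x) + 0.3 * x

n_particles = 2000
particles = rng.uniform(0.0, 1.0, size=n_particles)   # initial position uncertainty
true_x = 0.5
step, meas_sigma, motion_sigma = 0.1, 0.05, 0.02

for _ in range(30):
    # Predict: move each particle by the encoder's reported step plus noise.
    true_x += step
    particles += step + rng.normal(0.0, motion_sigma, size=n_particles)

    # Update: weight particles by the likelihood of the magnetic measurement
    # under the stored map.
    z = field(true_x) + rng.normal(0.0, meas_sigma)
    w = np.exp(-0.5 * ((z - field(particles)) / meas_sigma) ** 2)
    w /= w.sum()

    # Systematic resampling keeps the particle count fixed; the minimum guards
    # against float round-off at the end of the cumulative sum.
    positions = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
    particles = particles[idx]

estimate = float(particles.mean())
```

Using the encoder in the prediction step is what suppresses the ambiguity of the magnetic map alone: positions with similar field values but inconsistent motion histories lose weight within a few updates.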

  6. Multi-Component, Multi-Point Interferometric Rayleigh/Mie Doppler Velocimeter

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Lee, Joseph W.; Bivolaru, Daniel

    2012-01-01

    An interferometric Rayleigh scattering system was developed to enable the measurement of multiple, orthogonal velocity components at several points within very-high-speed or high-temperature flows. The velocity of a gaseous flow can be optically measured by sending laser light into the gas flow, and then measuring the scattered light signal that is returned from matter within the flow. Scattering can arise from either gas molecules within the flow itself, known as Rayleigh scattering, or from particles within the flow, known as Mie scattering. Measuring Mie scattering is the basis of all commercial laser Doppler and particle imaging velocimetry systems, but particle seeding is problematic when measuring high-speed and high-temperature flows. The velocimeter is designed to measure the Doppler shift from only Rayleigh scattering, and does not require, but can also measure, particles within the flow. The system combines a direct-view, large-optic interferometric setup that calculates the Doppler shift from fringe patterns collected with a digital camera, and a subsystem to capture and re-circulate scattered light to maximize signal density. By measuring two orthogonal components of the velocity at multiple positions in the flow volume, the accuracy and usefulness of the flow measurement increase significantly over single or nonorthogonal component approaches.

  7. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Bayesian Estimation of Cosmic Microwave Background Polarization Maps

    NASA Astrophysics Data System (ADS)

    Dunkley, J.; Spergel, D. N.; Komatsu, E.; Hinshaw, G.; Larson, D.; Nolta, M. R.; Odegard, N.; Page, L.; Bennett, C. L.; Gold, B.; Hill, R. S.; Jarosik, N.; Weiland, J. L.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.

    2009-08-01

    We describe a sampling method to estimate the polarized cosmic microwave background (CMB) signal from observed maps of the sky. We use a Metropolis-within-Gibbs algorithm to estimate the polarized CMB map, containing Q and U Stokes parameters at each pixel, and its covariance matrix. These can be used as inputs for cosmological analyses. The polarized sky signal is parameterized as the sum of three components: CMB, synchrotron emission, and thermal dust emission. The polarized Galactic components are modeled with spatially varying power-law spectral indices for the synchrotron, and a fixed power law for the dust, and their component maps are estimated as by-products. We apply the method to simulated low-resolution maps with pixels of side 7.2 deg, using diagonal and full noise realizations drawn from the WMAP noise matrices. The CMB maps are recovered with goodness of fit consistent with errors. Computing the likelihood of the E-mode power in the maps as a function of optical depth to reionization, τ, for fixed temperature anisotropy power, we recover τ = 0.091 ± 0.019 for a simulation with input τ = 0.1, and mean τ = 0.098 averaged over 10 simulations. A "null" simulation with no polarized CMB signal has maximum likelihood consistent with τ = 0. The method is applied to the five-year WMAP data, using the K, Ka, Q, and V channels. We find τ = 0.090 ± 0.019, compared to τ = 0.086 ± 0.016 from the template-cleaned maps used in the primary WMAP analysis. The synchrotron spectral index, β, averaged over high signal-to-noise pixels with standard deviation σ(β) < 0.25, but excluding ~6% of the sky masked in the Galactic plane, is -3.03 ± 0.04. This estimate does not vary significantly with Galactic latitude, although it includes an informative prior. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
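
The Metropolis-within-Gibbs pattern can be illustrated on a drastically reduced analogue: a pixelwise signal-plus-noise model in which the signal map is Gibbs-sampled from its Gaussian conditional, while a variance parameter (standing in for a power or amplitude parameter) gets a Metropolis update. Everything below is a toy assumption, not the WMAP likelihood.

```python
import numpy as np

rng = np.random.default_rng(6)

# Pixel-level signal-plus-noise model: y = s + n, noise variance known.
n_pix = 2000
sig_s2_true, sig_n2 = 1.0, 0.5
s_true = rng.normal(0.0, np.sqrt(sig_s2_true), n_pix)
y = s_true + rng.normal(0.0, np.sqrt(sig_n2), n_pix)

def loglik(s, v):
    # log p(s | v) for s_i ~ N(0, v), up to a constant
    return -0.5 * n_pix * np.log(v) - 0.5 * np.sum(s ** 2) / v

sig_s2 = 2.0                     # deliberately wrong starting guess
samples = []
for it in range(3000):
    # Gibbs step: s | y, sig_s2 is Gaussian (conjugate).
    var_post = 1.0 / (1.0 / sig_s2 + 1.0 / sig_n2)
    s = var_post * y / sig_n2 + np.sqrt(var_post) * rng.normal(size=n_pix)

    # Metropolis step: log-normal random walk on sig_s2. With a flat prior
    # on log(sig_s2), the prior and the proposal Jacobian cancel, leaving a
    # plain likelihood ratio.
    prop = sig_s2 * np.exp(0.1 * rng.normal())
    if np.log(rng.random()) < loglik(s, prop) - loglik(s, sig_s2):
        sig_s2 = prop
    if it >= 1000:               # discard burn-in
        samples.append(sig_s2)

sig_s2_hat = float(np.mean(samples))
```

The actual analysis replaces the scalar variance with spectral indices and per-pixel Q/U amplitudes, but the alternation between exact Gaussian conditionals and Metropolis updates is the same structural device.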

  8. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  9. Alternative licensing arrangements and spectrum economics: The case of multipoint distribution service

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.

    1981-01-01

    At present, the Federal Communications Commission assigns radio licenses following a determination of the public interest. Whenever mutually conflicting license applications are filed, the Commission holds a comparative hearing. This assignment mechanism is criticized as cumbersome and unreliable, and three alternatives are proposed: increasing the available spectrum, and either auctions or lotteries of radio licenses. An analysis is presented of the present system and these alternative arrangements for assigning rights to the frequency spectrum for the Multipoint Distribution Service (MDS). Although MDS is a relatively minor radio service, it serves as a prototype for message distribution services with a large potential for use in business communications. Moreover, the way in which the initial batch of MDS licenses was assigned provides a unique opportunity for empirical work on the economics of the licensing process.

  10. Multi-Point Thomson Scattering Diagnostic for the Helicity Injected Torus

    NASA Astrophysics Data System (ADS)

    Liptac, J. E.; Smith, R. J.; Hoffman, C. S.; Jarboe, T. R.; Nelson, B. A.; Leblanc, B. P.; Phillips, P.

    1999-11-01

    The multi-point Thomson scattering system on the Helicity Injected Torus--II can determine electron temperature and density at 11 radial positions at a single time during the plasma discharge. The system includes components on loan from both PPPL and the University of Texas. The collection optics and Littrow spectrometer from Princeton and the 1 GW laser and multi-anode microchannel plate detector from Texas have been integrated into a compact structure, creating a mobile and reliable diagnostic. The mobility of the system allows alignment to occur in a room adjacent to the experiment, greatly reducing the disturbance to normal machine operation. The four main parts of the Thomson scattering system, namely the laser, the beam line, the collection optics, and the mobile structure, are presented and discussed.

  11. Research on the Wire Network Signal Prediction Based on the Improved NNARX Model

    NASA Astrophysics Data System (ADS)

    Zhang, Zipeng; Fan, Tao; Wang, Shuqing

    It is difficult to accurately obtain the wire network signal of a power system's high-voltage transmission lines during monitoring and repair. To address this problem, the signal measured in a remote substation or laboratory is employed to make multipoint predictions of the needed data; however, the obtained power grid frequency signal is delayed. To solve this, this paper describes an improved NNARX network that predicts the frequency signal from multipoint data collected by remote substation PMUs. Because the error surface of the NNARX network is complicated, the Levenberg-Marquardt (L-M) algorithm is used to train the network. Simulation results show that the NNARX network has good prediction performance, providing accurate real-time data for field testing and maintenance.
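
As a much-simplified stand-in for the NNARX predictor, the sketch below fits a linear ARX(2) model by least squares and runs it recursively to make multipoint (multi-step) predictions of a synthetic frequency-like signal. The NNARX model of the paper replaces this linear map with a neural network trained by Levenberg-Marquardt; the signal and model orders here are assumptions.

```python
import numpy as np

# A 50 Hz-like frequency signal with a slow synthetic oscillation, sampled at 100 Hz.
t = np.arange(0, 4, 0.01)
y = 50 + 0.02 * np.sin(2 * np.pi * 0.8 * t)

# Fit a linear ARX(2) predictor y[k] ~ a1*y[k-1] + a2*y[k-2] + b by least squares.
Y = y[2:200]
Phi = np.column_stack([y[1:199], y[0:198], np.ones(198)])
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Multipoint (recursive multi-step) prediction beyond the training window:
# each predicted value is fed back as an input for the next step.
pred = list(y[198:200])
for _ in range(50):
    pred.append(coef[0] * pred[-1] + coef[1] * pred[-2] + coef[2])
pred = np.array(pred[2:])

err = float(np.max(np.abs(pred - y[200:250])))
```

Feeding predictions back as inputs is what makes the scheme "multipoint": it supplies estimates for the interval during which the true remote signal has not yet arrived.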

  12. Approaching Solar Maximum 24 with Stereo-Multipoint Observations of Solar Energetic Particle Events

    NASA Technical Reports Server (NTRS)

    Dresing, N.; Cohen, C. M. S.; Gomez-Herrero, R.; Heber, B.; Klassen, A.; Leske, R. A.; Mason, G. M.; Mewaldt, R. A.; von Rosenvinge, T. T.

    2014-01-01

    Since the beginning of the Solar Terrestrial Relations Observatory (STEREO) mission at the end of 2006, the two spacecraft have now separated by more than 130 degrees from the Earth. A 360-degree view of the Sun has been possible since February 2011, providing multipoint in situ and remote sensing observations of unprecedented quality. Combining STEREO observations with near-Earth measurements allows the study of solar energetic particle (SEP) events over a wide longitudinal range with minimal radial gradient effects. This contribution provides an overview of recent results obtained by the STEREO/IMPACT team in combination with observations by the ACE and SOHO spacecraft. We focus especially on multi-spacecraft investigations of SEP events. The large longitudinal spread of electron and 3He-rich events as well as unusual anisotropies will be presented and discussed.

  13. Searching susceptibility loci for bipolar disorder: a sib pair study on chromosome 12.

    PubMed

    Lorenzi, Cristina; Delmonte, Dario; Pirovano, Adele; Marino, Elena; Bongiorno, Fanny; Catalano, Marco; Colombo, Cristina; Bramanti, Placido; Smeraldi, Enrico

    2010-01-01

    Several linkage studies demonstrated that different chromosomal regions are involved in the susceptibility to bipolar disorder. In particular, some genome scans have highlighted the role of chromosome 12. For this reason, our group chose this chromosome for a preliminary genome scan on a sample of 137 Italian sib pairs, each including at least 1 bipolar subject. The analyses were carried out by means of DNA extracted from whole blood. DNA samples were genotyped by 19 simple tandem repeat markers (microsatellites). Starting from the genetic data, we performed two- and multipoint linkage analyses (both parametric and nonparametric) by means of the Easy Linkage Plus package (version 5.05). The multipoint linkage analyses pointed out a region suggestive of linkage between the markers D12S310 and D12S364, at locus 12p12. In particular, we reached the best evidence of linkage performing multipoint analyses and assuming a recessive model, under the hypothesis of genetic heterogeneity (heterogeneity LOD score = 2.01 and alpha = 0.77). It is interesting to note that the region at the marker D12S364 is located inside the gene coding for the glutamatergic receptor GRIN2B. Therefore, our finding not only confirmed the role of genetics in determining liability to bipolar disorder, but suggested glutamatergic transmission impairment as a possible cause. Nevertheless, we acknowledge that our study is heavily underpowered. Therefore, independent replication is needed. (c) 2009 S. Karger AG, Basel.

  14. Sequential multipoint motion of the tympanic membrane measured by laser Doppler vibrometry: preliminary results for normal tympanic membrane.

    PubMed

    Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Kitano, Hiroya

    2014-04-01

    Numerous studies have reported sound-induced motion of the tympanic membrane (TM). To demonstrate sequential motion characteristics of the entire TM by noncontact laser Doppler vibrometry (LDV), we have investigated multipoint TM measurement. A laser Doppler vibrometer was mounted on a surgical microscope. The velocity was measured at 33 points on the TM using noncontact LDV without any reflectors. Measurements were performed with tonal stimuli of 1, 3, and 6 kHz. Amplitudes were calculated from these measurements, and time-dependent changes in TM motion were described using a graphics application. TM motions were detected more clearly and stably at 1 and 3 kHz than at other frequencies. This is because the external auditory canal acted as a resonant tube near 3 kHz. TM motion displayed 1 peak at 1 kHz and 2 peaks at 3 kHz. Large amplitudes were detected in the posterosuperior quadrant (PSQ) at 1 kHz and in the PSQ and anteroinferior quadrant (AIQ) at 3 kHz. The entire TM showed synchronized movement centered on the PSQ at 1 kHz, with phase-shifting between PSQ and AIQ movement at 3 kHz. Amplitude was smaller at the umbo than at other parts. In contrast, amplitudes at high frequencies were too small and complicated to detect any obvious peaks. Sequential multipoint motion of the tympanic membrane showed that vibration characteristics of the TM differ according to the part and frequency.

  15. Facilitating the exploitation of ERTS imagery using snow enhancement techniques. [geological mapping of New England test area

    NASA Technical Reports Server (NTRS)

    Wobber, F. J.; Martin, K. R. (Principal Investigator); Amato, R. V.; Leshendok, T.

    1974-01-01

    The author has identified the following significant results. The procedure for conducting a regional geological mapping program utilizing snow-enhanced ERTS-1 imagery has been summarized. While it is recognized that mapping procedures in geological programs will vary from area to area and from geologist to geologist, it is believed that the procedure tested in this project is applicable over a wide range of mapping programs. The procedure is designed to maximize the utility and value of ERTS-1 imagery and aerial photography within the initial phase of geological mapping programs. Sample products which represent interim steps in the mapping formula (e.g. the ERTS Fracture-Lineament Map) have been prepared. A full account of these procedures and products will be included within the Snow Enhancement Users Manual.

  16. Genetic and physical mapping of the Treacher Collins syndrome locus with respect to loci in the chromosome 5q3 region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabs, E.W.; Li, Xiang; Coss, C.

    Treacher Collins syndrome is an autosomal dominant, craniofacial developmental disorder, and its locus (TCOF1) has been mapped to chromosome 5q3. To refine the location of the gene within this region, linkage analysis was performed between the TCOF1 locus and 12 loci (IL9, FGFA, GRL, D5S207, D5S210, D5S376, CSF1R, SPARC, D5S119, D5S209, D5S527, FGFR4) in 13 Treacher Collins syndrome families. The highest maximum lod score was obtained between loci TCOF1 and D5S210 (Z = 10.52; θ = 0.02 ± 0.07). The best order, IL9-GRL-D5S207/D5S210-CSF1R-SPARC-D5S119, and genetic distances among these loci were determined in the 40 CEPH families by multipoint linkage analysis. YAC clones were used to establish the order of loci, centromere-5′GRL3′-D5S207-D5S210-D5S376-CSF1R-SPARC-D5S119-telomere. By combining known physical mapping data with ours, the order of chromosome 5q3 markers is centromere-IL9-FGFA-5′GRL3′-D5S207-D5S210-D5S376-CSF1R-SPARC-D5S119-D5S209-FGFR4-telomere. Based on this order, haplotype analysis suggests that the TCOF1 locus resides distal to CSF1R and proximal to SPARC within a region less than 1 Mb in size. 29 refs., 2 figs., 2 tabs.

  17. LODVIEW: a computer program for the graphical evaluation of lod score results in exclusion mapping of human disease genes.

    PubMed

    Hildebrandt, F; Pohlmann, A; Omran, H

    1993-12-01

    For linkage analysis projects aimed at mapping hereditary disease genes in humans, hundreds of highly polymorphic microsatellite markers which can be typed by PCR (PCR markers) have become available. With this technical improvement, the availability of a technique allowing for transparency in the handling of rapidly generated lod score data is becoming important. We present a computer program LODVIEW for the graphical representation of lod score data. It is designed for the input of lod score data generated with the LINKAGE package or similar programs. LODVIEW consists of 24 preformatted files, one for each chromosome. Each file contains a table for the input of lod score data and a file for the graphical representation of the data, which will show automatically any entry that is made in the respective input table. The program provides the user with published PCR marker information pre-entered into a table and graph at the correct positions corresponding to the genetic distances between markers. The graphical display of LODVIEW allows for the rapid evaluation of lod score results calculated from PCR markers on each chromosome. The following information can be obtained from the graphical display at one glance: (i) Regions of exclusion (Z(theta) < -2) and nonexclusion, (ii) markers with positive lod scores, (iii) the distribution of positive and negative lod scores among the families examined (indication of genetic heterogeneity), (iv) multipoint lod scores, and (v) the availability of PCR markers in regions of interest. The program is continually updated for novel PCR marker information from the literature. The program will help to efficiently monitor and direct the progress of exclusion mapping projects.

  18. The maximal amount of dietary alpha-tocopherol intake in U.S. adults (NHANES 2001-2002).

    PubMed

    Gao, Xiang; Wilde, Parke E; Lichtenstein, Alice H; Bermudez, Odilia I; Tucker, Katherine L

    2006-04-01

    The current study was designed to determine the maximal amount of alpha-tocopherol intake obtained from food in the U.S. diet, and to examine the effect of different food group intakes on this amount. Data from 2138 men and 2213 women aged >18 y were obtained from the National Health and Nutrition Examination Survey (NHANES) 2001-2002. Linear programming was used to generate diets with maximal alpha-tocopherol intake, with the conditions of meeting the recommended daily allowances or adequate intakes for a set of nutrients, sodium and fat recommendations, and energy limits, and that were compatible with the observed dietary patterns in the population. With food use and energy constraints in models, diets formulated by linear programming provided 19.3-24.9 mg alpha-tocopherol for men and women aged 19-50 or >50 y. These amounts decreased to 15.4-19.9 mg with the addition of the sodium, dietary reference intake, and fat constraints. The relations between maximal alpha-tocopherol intake and food group intakes were influenced by total fat restrictions. Although meeting current recommendations (15 mg/d) appears feasible for individuals, dramatic dietary changes that include greater intakes of nuts and seeds, and fruit and vegetables, are needed. Careful selection of the highest vitamin E source foods within these groups could further increase the likelihood of meeting the current recommended daily allowance.
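
    The study above uses linear programming to maximize alpha-tocopherol intake subject to nutrient, sodium, fat, and energy constraints. As a toy sketch of the same formulation (maximize c·x subject to Ax ≤ b, x ≥ 0) with just two hypothetical foods and made-up nutrient numbers, a 2-D LP can be solved by enumerating the vertices of the feasible polygon, since an LP optimum always lies at a vertex; in practice one would use a real solver such as scipy.optimize.linprog:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0 by enumerating vertices
    of the 2-D feasible polygon (an LP optimum always lies at a vertex)."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]  # x >= 0 as -x <= 0
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraints, no vertex
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det  # Cramer's rule
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x1 + r[1] * x2 <= v + 1e-9 for r, v in zip(rows, rhs)):
            val = c[0] * x1 + c[1] * x2
            if best is None or val > best[0]:
                best = (val, (x1, x2))
    return best

# Two hypothetical foods (servings x1, x2); numbers are illustrative only.
# Maximize tocopherol (mg/serving) subject to <= 2000 kcal and <= 70 g fat.
best = solve_lp_2d(c=[9.0, 2.0],
                   A=[[600.0, 100.0],   # energy per serving (kcal)
                      [50.0, 1.0]],     # fat per serving (g)
                   b=[2000.0, 70.0])
```

With these invented numbers the energy limit binds and the optimum takes only the low-fat food, mirroring the paper's finding that fat and energy constraints cap achievable tocopherol intake.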

  19. Wildfire potential mapping over the state of Mississippi: A land surface modeling approach

    Treesearch

    William H. Cooke; Georgy V. Mostovoy; Valentine G. Anantharaj; W. Matt Jolly

    2012-01-01

    Relationships between the likelihood of wildfires and various drought metrics (soil moisture-based fire potential indices) were examined over the southern part of Mississippi. The following three indices were tested and used to simulate spatial and temporal wildfire probability changes: (1) the accumulated difference between daily precipitation and potential...

  20. Categorical likelihood method for combining NDVI and elevation information for cotton precision agricultural applications

    USDA-ARS?s Scientific Manuscript database

    This presentation investigates an algorithm to fuse the Normalized Difference Vegetation Index (NDVI) with LiDAR elevation data to produce a map useful for the site-specific scouting and pest management (Willers et al. 1999; 2005; 2009) of the cotton insect pests, the tarnished plant bug (Lygus lin...

  1. Mapping benthic macroalgal communities in the coastal zone using CHRIS-PROBA mode 2 images

    NASA Astrophysics Data System (ADS)

    Casal, G.; Kutser, T.; Domínguez-Gómez, J. A.; Sánchez-Carnero, N.; Freire, J.

    2011-09-01

    The ecological importance of benthic macroalgal communities in coastal ecosystems has been recognised worldwide, and the application of remote sensing to study these communities presents certain advantages with respect to in situ methods. The present study used three CHRIS-PROBA images to analyse the distribution of macroalgal communities in the Seno de Corcubión (NW Spain). The use of this sensor represents a challenge, given that its design, build and deployment programme is intended to follow "faster, better, cheaper" principles. To assess the applicability of this sensor to macroalgal mapping, two types of classification were carried out: Maximum Likelihood and Spectral Angle Mapper (SAM). The Maximum Likelihood classifier showed positive results, reaching overall accuracies higher than 90% and kappa coefficients higher than 0.80 for the bottom classes shallow submerged sand, deep submerged sand, macroalgae at less than 5 m, and macroalgae between 5 and 10 m depth. The differentiation among macroalgal groups using SAM classification showed positive results for green seaweeds, although the differentiation between brown and red algae was not clear in the study area.
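
    The SAM classifier mentioned above assigns each pixel to the reference class whose spectrum makes the smallest angle with the pixel spectrum, which makes it insensitive to overall brightness (e.g. illumination or depth effects). A minimal sketch, with hypothetical 3-band reflectance spectra rather than real CHRIS-PROBA data:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra; invariant to overall scaling."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references):
    """Assign the pixel to the reference class with the smallest spectral angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

# Hypothetical 3-band reference spectra (illustrative values only).
refs = {"sand": [0.40, 0.50, 0.60], "green_algae": [0.10, 0.40, 0.20]}
```

A dimmed pixel with the same spectral shape as a reference (e.g. half-brightness green algae) still classifies correctly, which is exactly why SAM is attractive for submerged targets.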

  2. Technology-driven dietary assessment: a software developer’s perspective

    PubMed Central

    Buday, Richard; Tapia, Ramsey; Maze, Gary R.

    2015-01-01

    Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224

  3. Mobile Stroke Unit Reduces Time to Image Acquisition and Reporting.

    PubMed

    Nyberg, E M; Cox, J R; Kowalski, R G; Duarte, D V; Schimpf, B; Jones, W J

    2018-05-17

    Timely administration of thrombolytic therapy is critical to maximizing the likelihood of favorable outcomes in patients with acute ischemic stroke. Although emergency medical service activation overall improves the timeliness of acute stroke treatment, the time from emergency medical service dispatch to hospital arrival unavoidably decreases the timeliness of thrombolytic administration. Our mobile stroke unit, a new-generation ambulance with on-board CT scanning capability, reduces key imaging time metrics and facilitates in-the-field delivery of IV thrombolytic therapy. © 2018 by American Journal of Neuroradiology.

  4. A blind search for a common signal in gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Creswell, James; von Hausegger, Sebastian; Jackson, Andrew D.; Naselsky, Pavel

    2018-02-01

    We propose a blind, template-free method for the extraction of a common signal between the Hanford and Livingston detectors and apply it especially to the GW150914 event. We construct a log-likelihood method that maximizes the cross-correlation between each detector and the common signal and minimizes the cross-correlation between the residuals. The reliability of this method is tested using simulations with an injected common signal. Finally, our method is used to assess the quality of theoretical gravitational wave templates for GW150914.
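
    The method above maximizes the cross-correlation between the two detectors' records. As a much-simplified illustration of that core idea only (not the paper's log-likelihood construction), the sketch below finds the relative time shift, in samples, at which two records of a common waveform agree best; the signals are synthetic:

```python
import math

def best_lag(x, y, max_lag):
    """Lag (in samples) that maximizes the cross-correlation of x and y."""
    def xcorr(lag):
        return sum(x[i] * y[i + lag]
                   for i in range(len(x)) if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# A common waveform, recorded by a second "detector" 3 samples later.
s = [math.sin(0.2 * i) for i in range(200)]
x = s
y = [0.0, 0.0, 0.0] + s[:-3]
```

For GW150914 the analogous shift between Hanford and Livingston is about 7 ms; after aligning, the common-signal estimate is the component that correlates with both records while leaving uncorrelated residuals.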

  5. A primer on criticality safety

    DOE PAGES

    Costa, David A.; Cournoyer, Michael E.; Merhege, James F.; ...

    2017-05-01

    Criticality is the state of a nuclear chain-reacting medium when the chain reaction is just self-sustaining (or critical). Criticality depends on nine interrelated parameters. Criticality safety controls are designed to constrain these parameters so as to minimize fissions and maximize neutron leakage and absorption in other materials, which makes criticality more difficult or impossible to achieve. In this primer, the consequences of criticality accidents are discussed, the nine interrelated parameters that combine to affect criticality are described, and the criticality safety controls used to minimize the likelihood of a criticality accident are presented.

  6. Surface-mediated nucleation in the solid-state polymorph transformation of terephthalic acid.

    PubMed

    Beckham, Gregg T; Peters, Baron; Starbuck, Cindy; Variankaval, Narayan; Trout, Bernhardt L

    2007-04-18

    A molecular mechanism for nucleation for the solid-state polymorph transformation of terephthalic acid is presented. New methods recently developed in our group, aimless shooting and likelihood maximization, are employed to construct a model for the reaction coordinate for the two system sizes studied. The reaction coordinate approximation is validated using the committor probability analysis. The transformation proceeds via a localized, elongated nucleus along the crystal edge formed by fluctuations in the supramolecular synthons, suggesting a nucleation and growth mechanism in the macroscopic system.

  7. Mapping quantitative trait loci for binary trait in the F2:3 design.

    PubMed

    Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang

    2008-12-01

    In the analysis of inheritance of quantitative traits with low heritability, an F(2:3) design that genotypes plants in F(2) and phenotypes plants in F(2:3) progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F(2:3) design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F(2:3) design. The fundamental idea was: the F(2) plants were genotyped, all phenotypic values of each F(2:3) progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates for the effects and the locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F(2:3) design is more efficient than the classical F(2) design, even though only a single progeny is collected from each F(2:3) family. With the maximum likelihood approaches under the penetrance and the liability models developed in this study, we can map binary traits as we do for quantitative traits in the F(2:3) design.
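
    Under a penetrance model, the binary phenotype is Bernoulli with a genotype-specific penetrance, and the maximum likelihood estimate per genotype class is simply the observed affected proportion. A minimal sketch of that single step, with invented counts (the paper's full method also handles unobserved QTL genotypes via the marker data, which this omits):

```python
import math

def penetrance_mle(counts):
    """counts: genotype -> (n_affected, n_total). Under a binomial penetrance
    model, the ML estimate per genotype is the observed affected proportion."""
    return {g: a / n for g, (a, n) in counts.items()}

def log_likelihood(counts, f):
    """Binomial log-likelihood (up to a constant) of penetrances f."""
    return sum(a * math.log(f[g]) + (n - a) * math.log(1.0 - f[g])
               for g, (a, n) in counts.items())

# Hypothetical affected/total progeny counts per QTL genotype class.
counts = {"QQ": (30, 40), "Qq": (20, 40), "qq": (5, 40)}
f = penetrance_mle(counts)
```

Any alternative set of penetrances yields a lower log-likelihood than the MLE, which is the property the mapping procedure exploits when scanning locus positions.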

  8. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
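
    The output-error idea in the frequency domain is to choose model parameters that minimize the squared mismatch between measured frequency-response points and the model's response. As a deliberately crude illustration only (a grid search on a hypothetical first-order model, not the paper's iterative maximum likelihood algorithm or aircraft equations):

```python
def output_error(K, T, freqs, measured):
    """Sum of squared errors between measured frequency-response points
    and the first-order model H(jw) = K / (1 + jwT)."""
    return sum(abs(h - K / complex(1.0, w * T)) ** 2
               for w, h in zip(freqs, measured))

def fit_first_order(freqs, measured):
    """Grid search over gain K and time constant T; a stand-in for the
    iterative output-error minimization described in the abstract."""
    grid = [i / 50.0 for i in range(1, 251)]  # 0.02 .. 5.00
    return min(((K, T) for K in grid for T in grid),
               key=lambda p: output_error(p[0], p[1], freqs, measured))

# Synthetic "measured" response of a system with K = 2.0, T = 0.5 s.
freqs = [0.5, 1.0, 2.0, 4.0, 8.0]  # rad/s
measured = [2.0 / complex(1.0, w * 0.5) for w in freqs]
K_hat, T_hat = fit_first_order(freqs, measured)
```

Real identification would use gradient-based optimization and noise weighting, but the cost function being minimized has exactly this shape.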

  9. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
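
    A minimal sketch of the MCMC idea described above, using plain random-walk Metropolis rather than the affine-invariant ensemble sampler of the paper, and invented synthetic data in place of wafer measurements: the chain samples the posterior of a single location parameter under a Student's-t likelihood and a flat prior.

```python
import math
import random

def log_like(mu, data, scale=0.1, nu=5.0):
    """Student's-t log-likelihood (up to a constant) for a location parameter."""
    return sum(-(nu + 1.0) / 2.0 * math.log(1.0 + ((x - mu) / scale) ** 2 / nu)
               for x in data)

def metropolis(data, n_steps=6000, step=0.05, seed=42):
    """Random-walk Metropolis sampling of the posterior under a flat prior."""
    rng = random.Random(seed)
    mu, ll = 0.0, log_like(0.0, data)
    chain = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)
        ll_prop = log_like(prop, data)
        if math.log(rng.random()) < ll_prop - ll:  # Metropolis accept rule
            mu, ll = prop, ll_prop
        chain.append(mu)
    return chain

# Synthetic observations scattered around a true parameter value of 1.0.
data = [1.0 + 0.1 * math.sin(i) for i in range(20)]
chain = metropolis(data)
posterior_mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
```

Summaries such as the posterior mean or highest density interval are then computed from the post-burn-in samples, which is the role the HDIs play in the paper's model selection.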

  10. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    PubMed

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  11. Insecticide resistance, control failure likelihood and the First Law of Geography.

    PubMed

    Guedes, Raul Narciso C

    2017-03-01

    Insecticide resistance is a broadly recognized ecological backlash resulting from insecticide use and is widely reported among arthropod pest species, with well-recognized underlying mechanisms and consequences. Nonetheless, insecticide resistance is the subject of evolving conceptual views, which introduce a different concept that is useful if recognized in its own right - the risk or likelihood of control failure. Here we suggest an experimental approach to assess the likelihood of control failure of an insecticide, allowing for consistent decision-making regarding management of insecticide resistance. We also challenge the current emphasis on limited spatial sampling of arthropod populations for resistance diagnosis in favor of comprehensive spatial sampling. This necessarily requires larger population sampling - aiming to use spatial analysis in area-wide surveys - to recognize focal points of insecticide resistance and/or control failure that will better direct management efforts. The continuous geographical scale of such surveys will depend on the arthropod pest species, the pattern of insecticide use and many other potential factors. Regardless, distance dependence among sampling sites should still hold, following the maxim that the closer two things are, the more they resemble each other, which is the basis of Tobler's First Law of Geography. © 2016 Society of Chemical Industry.

  12. Further localization of the gene for nevoid basal cell carcinoma syndrome (NBCCS) in 15 Australasian families: Linkage and loss of heterozygosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chenevix-Trench, G.; Wicking, C.; Berkman, J.

    Nevoid basal cell carcinoma syndrome (NBCCS; basal cell nevus syndrome or Gorlin syndrome) is a cancer-predisposition syndrome characterized by multiple basal cell carcinomas (BCCs) and diverse developmental defects. The gene for NBCCS has been mapped to 9q23.1-q31 in North American and European families. In addition, loss of heterozygosity (LOH) for genetic markers in this region has been detected in sporadic BCCs, indicating that the NBCCS gene is probably a tumor-suppressor gene. In this study the authors have determined that the NBCCS gene is also linked to this region in Australasian pedigrees and that there is no significant evidence of heterogeneity. They have defined the localization of the gene by multipoint and haplotype analysis of 15 families, using four microsatellite markers. LOH at these loci was detected in 50% of sporadic BCCs, a rate that is significantly higher than that in other skin lesions used as controls. 21 refs., 3 figs., 2 tabs.

  13. Machado-Joseph disease in pedigrees of Azorean descent is linked to chromosome 14.

    PubMed

    St George-Hyslop, P; Rogaeva, E; Huterer, J; Tsuda, T; Santos, J; Haines, J L; Schlumpf, K; Rogaev, E I; Liang, Y; McLachlan, D R

    1994-07-01

    A locus for Machado-Joseph disease (MJD) has recently been mapped to a 30-cM region of chromosome 14q in five pedigrees of Japanese descent. MJD is a clinically pleomorphic neurodegenerative disease that was originally described in subjects of Azorean descent. In light of the nonallelic heterogeneity in other inherited spinocerebellar ataxias, we were interested to determine if the MJD phenotype in Japanese and Azorean pedigrees arose from mutations at the same locus. We provide evidence that MJD in five pedigrees of Azorean descent is also linked to chromosome 14q in an 18-cM region between the markers D14S67 and AACT (multipoint lod score +7.00 near D14S81). We also report molecular evidence for homozygosity at the MJD locus in an MJD-affected subject with severe, early-onset symptoms. These observations confirm the initial report of linkage of MJD to chromosome 14; suggest that MJD in Japanese and Azorean subjects may represent allelic or identical mutations at the same locus; and provide one possible explanation (MJD gene dosage) for the observed phenotypic heterogeneity in this disease.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda, Adelaide; De Beule, Pieter A. A., E-mail: pieter.de-beule@inl.int; Martins, Marco

    Combined microscopy techniques offer the life science research community a powerful tool to investigate complex biological systems and their interactions. Here, we present a new combined microscopy platform based on fluorescence optical sectioning microscopy through aperture correlation microscopy with a Differential Spinning Disk (DSD) and nanomechanical mapping with an Atomic Force Microscope (AFM). The illumination scheme of the DSD microscope unit, contrary to standard single or multi-point confocal microscopes, provides a time-independent illumination of the AFM cantilever. This enables a distortion-free simultaneous operation of fluorescence optical sectioning microscopy and atomic force microscopy with standard probes. In this context, we discuss sample heating due to AFM cantilever illumination with fluorescence excitation light. Integration of a DSD fluorescence optical sectioning unit with an AFM platform requires mitigation of mechanical noise transfer of the spinning disk. We identify and present two solutions to almost annul this noise in the AFM measurement process. The new combined microscopy platform is applied to the characterization of DOPC/DOPS (4:1) lipid structures labelled with a lipophilic cationic indocarbocyanine dye deposited on a mica substrate.

  15. KSC-06pd0181

    NASA Image and Video Library

    2006-01-17

    VANDENBERG AIR FORCE BASE, Calif. — At Vandenberg Air Force Base in California, workers are moving the Space Technology 5 (ST5) spacecraft into Orbital Sciences’ Building 1555. There it will be mated with the Pegasus XL launch vehicle. ST5 will be launched by a Pegasus XL rocket. The satellites contain miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled for Feb. 28 from Vandenberg Air Force Base.

  16. KSC-06pd0445

    NASA Image and Video Library

    2006-02-14

    VANDENBERG AIR FORCE BASE, CALIF. - Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, workers clean and prepare the fairing to be installed around the Space Technology 5 (ST5) spacecraft. The ST5 contains three microsatellites with miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled from the belly of an L-1011 carrier aircraft no earlier than March 14 from Vandenberg Air Force Base.

  17. KSC-06pd0438

    NASA Image and Video Library

    2006-02-14

    VANDENBERG AIR FORCE BASE, CALIF. - Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, workers check the Orbital Sciences' Pegasus XL launch vehicle before encapsulation of the Space Technology 5 (ST5) spacecraft. The ST5 contains three microsatellites with miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled from the belly of an L-1011 carrier aircraft no earlier than March 14 from Vandenberg Air Force Base.

  18. KSC-06pd0186

    NASA Image and Video Library

    2006-01-18

    VANDENBERG AIR FORCE BASE, Calif. — Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, the wrapped Space Technology 5 (ST5) spacecraft is revealed after removal of the shipping container. ST5 will be launched by a Pegasus XL rocket. The satellites contain miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled for Feb. 28 from Vandenberg Air Force Base.

  19. KSC-06pd0437

    NASA Image and Video Library

    2006-02-14

    VANDENBERG AIR FORCE BASE, CALIF. - Inside Orbital Sciences’ Building 1555 at Vandenberg Air Force Base in California, a worker checks connections on the Space Technology 5 (ST5) spacecraft before encapsulation with the fairing. The ST5, mated to Orbital Sciences' Pegasus XL launch vehicle, contains three microsatellites with miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and its effects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled from the belly of an L-1011 carrier aircraft no earlier than March 14 from Vandenberg Air Force Base.

  20. KSC-06pd0172

    NASA Image and Video Library

    2006-01-13

    VANDENBERG AIR FORCE BASE, Calif. — In the Orbital Sciences Building 836 at Vandenberg Air Force Base in California, the three micro-satellites comprising the Space Technology 5 spacecraft are mated and ready for weighing. ST5 will be launched by a Pegasus XL rocket. The satellites contain miniaturized redundant components and technologies. Each will validate New Millennium Program selected technologies, such as the Cold Gas Micro-Thruster and X-Band Transponder Communication System. After deployment from the Pegasus, the micro-satellites will be positioned in a “string of pearls” constellation that demonstrates the ability to position them to perform simultaneous multi-point measurements of the magnetic field using highly sensitive magnetometers. The data will help scientists understand and map the intensity and direction of the Earth’s magnetic field, its relation to space weather events, and affects on our planet. With such missions, NASA hopes to improve scientists’ ability to accurately forecast space weather and minimize its harmful effects on space- and ground-based systems. Launch of ST5 is scheduled for Feb. 28 from Vandenberg Air Force Base.