Sample records for weakly informative priors

  1. Sensitivity analyses for sparse-data problems – using weakly informative Bayesian priors.

    PubMed

    Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R

    2013-03-01

    Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.

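    A minimal sketch of the shrinkage behavior described above, using a normal approximation to the log odds-ratio likelihood combined with a weakly informative normal prior by precision weighting (an illustration only, not the authors' MCMC implementation; the 2x2 table counts are invented):

    ```python
    import numpy as np

    def posterior_log_or(a, b, c, d, prior_mean=0.0, prior_sd=1.5):
        """Approximate posterior for a log odds ratio from a 2x2 table
        (a, b = exposed cases/non-cases; c, d = unexposed cases/non-cases).
        The likelihood is summarized by the Woolf estimate and its standard
        error, then combined with a normal prior by precision weighting."""
        beta_hat = np.log((a * d) / (b * c))            # observed log OR
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)             # Woolf standard error
        w_data, w_prior = 1 / se**2, 1 / prior_sd**2    # precisions
        post_mean = (w_data * beta_hat + w_prior * prior_mean) / (w_data + w_prior)
        post_sd = np.sqrt(1 / (w_data + w_prior))
        return beta_hat, post_mean, post_sd

    # Weakly informative prior: 95% prior mass for the OR roughly in (1/20, 20)
    sd = np.log(20) / 1.96

    # Sparse table: the estimate shrinks markedly toward the prior mean (log OR = 0)
    print(posterior_log_or(3, 10, 1, 12, prior_sd=sd))

    # Ample data: the same prior is barely influential
    print(posterior_log_or(300, 1000, 100, 1200, prior_sd=sd))
    ```
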
  2. Mendelian randomization with Egger pleiotropy correction and weakly informative Bayesian priors.

    PubMed

    Schmidt, A F; Dudbridge, F

    2017-12-15

    The MR-Egger (MRE) estimator has been proposed to correct for directional pleiotropic effects of genetic instruments in an instrumental variable (IV) analysis. The power of this method is considerably lower than that of conventional estimators, limiting its applicability. Here we propose a novel Bayesian implementation of the MR-Egger estimator (BMRE) and explore the utility of applying weakly informative priors on the intercept term (the pleiotropy estimate) to increase power of the IV (slope) estimate. This was a simulation study to compare the performance of different IV estimators. Scenarios differed in the presence of a causal effect, the presence of pleiotropy, the proportion of pleiotropic instruments and degree of 'Instrument Strength Independent of Direct Effect' (InSIDE) assumption violation. Based on empirical plasma urate data, we present an approach to elucidate a prior distribution for the amount of pleiotropy. A weakly informative prior on the intercept term increased power of the slope estimate while maintaining type 1 error rates close to the nominal value of 0.05. Under the InSIDE assumption, performance was unaffected by the presence or absence of pleiotropy. Violation of the InSIDE assumption biased all estimators, affecting the BMRE more than the MRE method. Depending on the prior distribution, the BMRE estimator has more power at the cost of an increased susceptibility to InSIDE assumption violations. As such the BMRE method is a compromise between the MRE and conventional IV estimators, and may be an especially useful approach to account for observed pleiotropy.

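    The core of the BMRE idea can be sketched as a Gaussian-conjugate weighted regression: a tight zero-centered prior on the Egger intercept (the pleiotropy term) and a vague prior on the slope (the causal effect). The function below and the simulated summary statistics are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def bayesian_mr_egger(bx, by, se_by, intercept_sd=0.01, slope_sd=10.0):
        """Conjugate posterior for by_j ~ N(alpha + beta * bx_j, se_by_j^2),
        with zero-mean normal priors: tight on alpha (little directional
        pleiotropy expected), vague on beta (the causal effect)."""
        X = np.column_stack([np.ones_like(bx), bx])
        W = np.diag(1.0 / se_by**2)                     # inverse-variance weights
        prior_prec = np.diag([1 / intercept_sd**2, 1 / slope_sd**2])
        post_cov = np.linalg.inv(X.T @ W @ X + prior_prec)
        post_mean = post_cov @ (X.T @ W @ by)           # prior means are zero
        return post_mean, np.sqrt(np.diag(post_cov))

    rng = np.random.default_rng(1)
    bx = rng.normal(0.1, 0.03, size=30)                 # SNP-exposure estimates
    se_by = np.full(30, 0.02)
    by = 0.4 * bx + rng.normal(0.0, se_by)              # causal effect 0.4, no pleiotropy
    mean, sd = bayesian_mr_egger(bx, by, se_by)
    print("intercept, slope:", mean.round(3), "+/-", sd.round(3))
    ```
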
  3. Minimally Informative Prior Distributions for PSA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly; Robert W. Youngblood; Kurt G. Vedros

    2010-06-01

    A salient feature of Bayesian inference is its ability to incorporate information from a variety of sources into the inference model, via the prior distribution (hereafter simply “the prior”). However, over-reliance on old information can lead to priors that dominate new data. Some analysts seek to avoid this by trying to work with a minimally informative prior distribution. Another reason for choosing a minimally informative prior is to avoid the often-voiced criticism of subjectivity in the choice of prior. Minimally informative priors fall into two broad classes: 1) so-called noninformative priors, which attempt to be completely objective, in that the posterior distribution is determined as completely as possible by the observed data, the most well known example in this class being the Jeffreys prior, and 2) priors that are diffuse over the region where the likelihood function is nonnegligible, but that incorporate some information about the parameters being estimated, such as a mean value. In this paper, we compare four approaches in the second class, with respect to their practical implications for Bayesian inference in Probabilistic Safety Assessment (PSA). The most commonly used such prior, the so-called constrained noninformative prior, is a special case of the maximum entropy prior. This is formulated as a conjugate distribution for the most commonly encountered aleatory models in PSA, and is correspondingly mathematically convenient; however, it has a relatively light tail and this can cause the posterior mean to be overly influenced by the prior in updates with sparse data. A more informative prior that is capable, in principle, of dealing more effectively with sparse data is a mixture of conjugate priors. A particular diffuse nonconjugate prior, the logistic-normal, is shown to behave similarly for some purposes. Finally, we review the so-called robust prior. Rather than relying on the mathematical abstraction of entropy, as does the…

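    The conjugate machinery compared in this record fits in a few lines: a gamma prior updated by Poisson data, with the Jeffreys prior and a mean-constrained gamma prior (shape 0.5, rate = shape/mean, a stand-in for the constrained noninformative prior) side by side. The event counts and exposure are invented:

    ```python
    from scipy import stats

    def poisson_gamma_posterior(shape0, rate0, events, exposure_hours):
        """Conjugate gamma update for a Poisson failure rate, the most common
        aleatory model in PSA. Returns the posterior as a frozen distribution."""
        return stats.gamma(a=shape0 + events, scale=1.0 / (rate0 + exposure_hours))

    data = dict(events=2, exposure_hours=10_000.0)

    # Jeffreys prior for a Poisson rate: shape 0.5, rate -> 0 (improper but usable)
    jeffreys = poisson_gamma_posterior(0.5, 0.0, **data)

    # Gamma prior with shape 0.5 constrained to a prior mean of 1e-4 per hour
    constrained = poisson_gamma_posterior(0.5, 0.5 / 1e-4, **data)

    for name, post in [("Jeffreys", jeffreys), ("constrained", constrained)]:
        print(name, post.mean(), post.interval(0.90))
    ```
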
  4. Determining informative priors for cognitive models.

    PubMed

    Lee, Michael D; Vanpaemel, Wolf

    2018-02-01

    The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.

  5. Addressing potential prior-data conflict when using informative priors in proof-of-concept studies.

    PubMed

    Mutsvari, Timothy; Tytgat, Dominique; Walley, Rosalind

    2016-01-01

    Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration.

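    The mixture-prior mechanics for a normal mean are fully conjugate and easy to sketch: each component updates in closed form, and the component weights are re-weighted by how well each component predicted the observed mean. The two-component prior below (an informative part plus a diffuse "robustifying" part) uses invented numbers:

    ```python
    import numpy as np
    from scipy import stats

    def mixture_posterior(xbar, se, comps):
        """Posterior for a normal mean under a mixture-of-normals prior.

        comps: list of (weight, prior_mean, prior_sd). Each component updates
        conjugately; its weight is multiplied by the prior predictive density
        of the observed mean under that component, then renormalized."""
        post = []
        for w, m, s in comps:
            evid = stats.norm(m, np.hypot(s, se)).pdf(xbar)   # prior predictive
            prec = 1 / s**2 + 1 / se**2
            post_m = (m / s**2 + xbar / se**2) / prec
            post.append([w * evid, post_m, np.sqrt(1 / prec)])
        total = sum(p[0] for p in post)
        return [(w / total, m, s) for w, m, s in post]

    # Informative component (e.g., historical placebo) plus a diffuse one
    prior = [(0.8, 0.0, 0.5), (0.2, 0.0, 5.0)]

    print(mixture_posterior(xbar=0.2, se=0.4, comps=prior))   # no conflict
    print(mixture_posterior(xbar=3.0, se=0.4, comps=prior))   # prior-data conflict
    ```

    Under conflict the diffuse component absorbs most of the posterior weight, which widens the posterior, matching the behavior described in the abstract.
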
  6. [Patient information prior to sterilization].

    PubMed

    Rasmussen, O V; Henriksen, L O; Baldur, B; Hansen, T

    1992-09-14

    The law in Denmark prescribes that the patient and the general practitioner to whom the patient directs his or her request for sterilization are obliged to confirm by their signatures that the patient has received information about sterilization, its risks and consequences. We asked 97 men and 96 women if they had received this information prior to their sterilization. They were also asked about their knowledge of sterilization. 54% of the women and 35% of the men indicated that they had not received information. Only a few of these wished further information from the hospital doctor. Knowledge about sterilization was good. It is concluded that the information given to the patient prior to sterilization is far from optimal. The patient's signature confirming verbal information is not a sufficient safeguard. We recommend, among other things, that the patient should receive written information and that both the general practitioner and the hospital responsible for the operation should ensure that optimal information is received by the patient.

  7. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a large amount of biological and biomedical data has already been collected, and these data may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.

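    For the linear-regression case, a criterion of the form ||y - Xb||^2 + eta * ||X @ beta_prior - Xb||^2 + penalty can be folded into an ordinary LASSO on row-augmented data. The sketch below assumes that form of the discrepancy term (the paper's exact weighting and notation may differ), and eta, lam and beta_prior are illustrative names:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def plasso_fit(X, y, beta_prior, eta=0.5, lam=0.05):
        """Sketch of a prior LASSO for linear regression: an ordinary LASSO
        on row-augmented data, so the fit is pulled toward the prior-implied
        predictions X @ beta_prior in proportion to eta."""
        y_prior = X @ beta_prior
        X_aug = np.vstack([X, np.sqrt(eta) * X])
        y_aug = np.concatenate([y, np.sqrt(eta) * y_prior])
        model = Lasso(alpha=lam, fit_intercept=False).fit(X_aug, y_aug)
        return model.coef_

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 10))
    beta_true = np.array([1.0, -1.0] + [0.0] * 8)
    y = X @ beta_true + rng.normal(scale=1.0, size=40)

    beta_prior = np.array([0.8, -0.8] + [0.0] * 8)     # roughly accurate prior
    print(plasso_fit(X, y, beta_prior, eta=2.0))       # strong pull toward the prior fit
    print(plasso_fit(X, y, beta_prior, eta=0.1))       # prior nearly ignored
    ```

    The augmentation trick works because the quadratic discrepancy term has the same form as extra (pseudo-)observations generated by the prior.
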
  8. Network inference using informative priors

    PubMed Central

    Mukherjee, Sach; Speed, Terence P.

    2008-01-01

    Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of “network inference” is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling. PMID:18799736

  9. Integrating prior information into microwave tomography part 2: Impact of errors in prior information on microwave tomography image quality.

    PubMed

    Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe

    2017-12-01

    The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in the literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact…

  10. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus, adds valuable precision when data is sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance – two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel…

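    One way to realize "moderately informative variance priors constructed from the pairwise meta-analysis data" is to collect DerSimonian-Laird between-trial variance estimates from each pairwise comparison and summarize them as a log-normal prior. The helper and the toy effect estimates below are assumptions for illustration:

    ```python
    import numpy as np

    def dl_tau2(yi, vi):
        """DerSimonian-Laird moment estimate of the between-trial variance
        tau^2 for one pairwise comparison (yi: trial effects, vi: variances)."""
        yi, w = np.asarray(yi), 1.0 / np.asarray(vi)
        q = np.sum(w * (yi - np.sum(w * yi) / w.sum())**2)
        c = w.sum() - np.sum(w**2) / w.sum()
        return max(0.0, (q - (len(yi) - 1)) / c)

    # tau^2 estimates from the pairwise comparisons of a network (toy numbers)
    tau2 = [
        dl_tau2([0.1, 0.9, -0.2, 0.6], [0.02, 0.02, 0.02, 0.02]),
        dl_tau2([0.5, -0.3, 0.8], [0.03, 0.03, 0.03]),
        dl_tau2([0.0, 0.4, 0.9, 0.2], [0.05, 0.05, 0.05, 0.05]),
    ]
    logs = np.log([t for t in tau2 if t > 0])

    # Moments of log(tau^2) give a moderately informative log-normal prior;
    # the floor on the spread keeps the prior 'moderate' rather than sharp.
    mu, sigma = logs.mean(), max(logs.std(ddof=1), 0.5)
    print(f"prior: log(tau^2) ~ Normal({mu:.2f}, {sigma:.2f}^2)")
    ```
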
  11. Information-reality complementarity in photonic weak measurements

    NASA Astrophysics Data System (ADS)

    Mancino, Luca; Sbroscia, Marco; Roccia, Emanuele; Gianani, Ilaria; Cimini, Valeria; Paternostro, Mauro; Barbieri, Marco

    2018-06-01

    The emergence of realistic properties is a key problem in understanding the quantum-to-classical transition. In this respect, measurements represent a way to interface quantum systems with the macroscopic world: these can be driven in the weak regime, where a reduced back-action can be imparted by choosing meter states able to extract different amounts of information. Here we explore the implications of such weak measurement for the variation of realistic properties of two-level quantum systems pre- and postmeasurement, and extend our investigations to the case of open systems implementing the measurements.

  12. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although many pieces of research have been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up priors for the 2014 model.

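    The two-step scheme (earlier survey years inform the prior for 2014) can be sketched with a Laplace approximation: fit the historical data under an essentially flat prior, then use the resulting estimate and covariance as a Gaussian prior when fitting the new data. The simulated data below stand in for the GHS; all names and numbers are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_post(beta, X, y, prior_mean, prior_prec):
        """Negative log posterior of a logistic regression with a Gaussian prior."""
        eta = X @ beta
        nll = np.sum(np.logaddexp(0.0, eta)) - y @ eta     # Bernoulli -log likelihood
        d = beta - prior_mean
        return nll + 0.5 * d @ prior_prec @ d

    def fit(X, y, prior_mean, prior_prec):
        res = minimize(neg_log_post, np.zeros(X.shape[1]),
                       args=(X, y, prior_mean, prior_prec), method="BFGS")
        return res.x, res.hess_inv      # MAP estimate, approximate Laplace covariance

    rng = np.random.default_rng(7)
    def simulate(n, beta):
        X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
        return X, rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))

    beta_true = np.array([-2.0, 0.8, -0.5])
    X_old, y_old = simulate(5000, beta_true)    # stand-in for the 2011-2013 surveys
    X_new, y_new = simulate(800, beta_true)     # stand-in for the 2014 survey

    b_old, cov_old = fit(X_old, y_old, np.zeros(3), 1e-6 * np.eye(3))  # ~flat prior
    b_new, _ = fit(X_new, y_new, b_old, np.linalg.inv(cov_old))        # informative
    print("historical estimate:", b_old.round(2))
    print("2014 estimate with informative prior:", b_new.round(2))
    ```
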
  13. Superposing pure quantum states with partial prior information

    NASA Astrophysics Data System (ADS)

    Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter

    2018-05-01

    The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d -dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.

  14. Quantum Counterfactual Information Transmission Without a Weak Trace

    NASA Astrophysics Data System (ADS)

    Arvidsson Shukur, David; Barnes, Crispin

    The classical theories of communication rely on the assumption that there has to be a flow of particles from Bob to Alice in order for him to send a message to her. We have developed a quantum protocol that allows Alice to perceive Bob's message "counterfactually", that is, without Alice receiving any particles that have interacted with Bob. By utilising a setup built on results from interaction-free measurements and the quantum Zeno effect, we outline a communication protocol in which the information travels in the opposite direction of the emitted particles. In comparison to previous attempts at such protocols, this one is such that a weak measurement at the message source would not leave a weak trace that could be detected by Alice's receiver. Whilst some interaction-free schemes require a large number of carefully aligned beam-splitters, our protocol is realisable with two or more beam-splitters. Furthermore, we outline how the classical Fisher information Alice obtains about a weak variable at Bob's laboratory is negligible in our scheme. We demonstrate this protocol by numerically solving the time-dependent Schrödinger Equation (TDSE) for a Hamiltonian that implements this quantum counterfactual phenomenon.

  15. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Because degradation signatures are weak during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.

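    The first step of the scheme, choosing the SVD denoising order by cumulative contribution rate, looks roughly like this (a generic Hankel-matrix truncated-SVD denoiser on a synthetic signal, not the authors' μ-SVD weighting):

    ```python
    import numpy as np

    def svd_denoise(x, embed=64, energy=0.85):
        """Truncated-SVD denoising of a 1-D signal via its Hankel (trajectory)
        matrix; the retained order k is the smallest one whose cumulative
        singular-value contribution rate reaches `energy`."""
        H = np.lib.stride_tricks.sliding_window_view(x, embed)    # Hankel rows
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
        H_hat = (U[:, :k] * s[:k]) @ Vt[:k]
        out, counts = np.zeros(len(x)), np.zeros(len(x))
        for i in range(H_hat.shape[0]):       # average anti-diagonals back to 1-D
            out[i:i + embed] += H_hat[i]
            counts[i:i + embed] += 1
        return out / counts

    t = np.linspace(0.0, 1.0, 2048)
    clean = np.sin(2 * np.pi * 37 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)
    noisy = clean + np.random.default_rng(3).normal(0.0, 0.5, t.size)
    print("rms error before:", np.std(noisy - clean).round(3),
          "after:", np.std(svd_denoise(noisy) - clean).round(3))
    ```
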
  16. A probability tracking approach to segmentation of ultrasound prostate images using weak shape priors

    NASA Astrophysics Data System (ADS)

    Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.

    2010-03-01

    Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on the imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms, which can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts such as speckle noise and shadowing, which result in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape priors. The latter acts as a regularization force, which minimally biases the segmentation procedure, while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.

  17. Investigating different approaches to develop informative priors in hierarchical Bayesian safety performance functions.

    PubMed

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-07-01

    The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. Different types of informative priors' effects on the model estimations and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made.

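    The "two-stage Bayesian updating" of priors can be sketched with the conjugate gamma-Poisson pair that underlies the Poisson-gamma SPF: the posterior from historical segments becomes the informative prior for new data. Covariates are omitted and all counts and exposures are invented:

    ```python
    from scipy import stats

    def update(shape, rate, crashes, exposure):
        """Conjugate gamma update for a Poisson crash rate per unit exposure."""
        return shape + sum(crashes), rate + sum(exposure)

    # Stage 1: historical segments turn a near-flat prior into an informative one
    shape, rate = update(1.0, 1.0, crashes=[3, 1, 4, 2],
                         exposure=[1.2, 0.8, 1.5, 1.0])

    # Stage 2: the stage-1 posterior serves as the prior for the new data
    shape, rate = update(shape, rate, crashes=[2, 0], exposure=[1.1, 0.9])

    posterior = stats.gamma(a=shape, scale=1.0 / rate)
    print("posterior mean crash rate:", round(posterior.mean(), 2),
          "90% CrI:", posterior.interval(0.90))
    ```
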
  18. Hippocampus segmentation using locally weighted prior based level set

    NASA Astrophysics Data System (ADS)

    Achuthan, Anusha; Rajeswari, Mandava

    2015-12-01

    Segmentation of the hippocampus is one of the major challenges in medical image segmentation because of its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures, such as the amygdala. This intensity similarity gives the hippocampus weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. Prior information, such as shape and spatial information, must therefore be assimilated into existing segmentation methods to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods, but the prior information has been utilized in a globally integrated manner, which does not reflect the real scenario during clinical delineation. Therefore, in this paper, a level set model with locally integrated prior information is presented. This work utilizes a mean shape model to provide automatic initialization for level set evolution, and this model has been integrated as prior information into the level set model. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during level set evolution. The edge weighting map shows which corresponding voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, improves the average Dice coefficient by 9%.

  19. Effects of prior information on decoding degraded speech: an fMRI study.

    PubMed

    Clos, Mareike; Langner, Robert; Meyer, Martin; Oechslin, Mathias S; Zilles, Karl; Eickhoff, Simon B

    2014-01-01

    Expectations and prior knowledge are thought to support the perceptual analysis of incoming sensory stimuli, as proposed by the predictive-coding framework. The current fMRI study investigated the effect of prior information on brain activity during the decoding of degraded speech stimuli. When prior information enabled the comprehension of the degraded sentences, the left middle temporal gyrus and the left angular gyrus were activated, highlighting a role of these areas in meaning extraction. In contrast, the activation of the left inferior frontal gyrus (area 44/45) appeared to reflect the search for meaningful information in degraded speech material that could not be decoded because of mismatches with the prior information. Our results show that degraded sentences evoke instantaneously different percepts and activation patterns depending on the type of prior information, in line with prediction-based accounts of perception.

  20. Integrating Informative Priors from Experimental Research with Bayesian Methods

    PubMed Central

    Hamra, Ghassan; Richardson, David; MacLehose, Richard; Wing, Steve

    2013-01-01

    Informative priors can be a useful tool for epidemiologists to handle problems of sparse data in regression modeling. It is sometimes the case that an investigator is studying a population exposed to two agents, X and Y, where Y is the agent of primary interest. Previous research may suggest that the exposures have different effects on the health outcome of interest, one being more harmful than the other. Such information may be derived from epidemiologic analyses; however, in the case where such evidence is unavailable, knowledge can be drawn from toxicologic studies or other experimental research. Unfortunately, using toxicologic findings to develop informative priors in epidemiologic analyses requires strong assumptions, with no established method for its utilization. We present a method to help bridge the gap between animal and cellular studies and epidemiologic research by specification of an order-constrained prior. We illustrate this approach using an example from radiation epidemiology. PMID:23222512

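    An order-constrained prior can be realized very simply by rejection sampling: draw the two log relative risks from their unconstrained priors and keep only draws satisfying the ordering suggested by the experimental evidence. This is a generic sketch of the idea, not the authors' exact specification; all parameter values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def order_constrained_prior(n, mean_x=0.5, mean_y=0.2, sd=0.5):
        """Draws from a joint prior for two log relative risks (beta_x, beta_y)
        subject to beta_x >= beta_y, encoding external (e.g., toxicologic)
        evidence that agent X is the more harmful one."""
        draws = []
        while len(draws) < n:
            bx, by = rng.normal(mean_x, sd), rng.normal(mean_y, sd)
            if bx >= by:                  # keep only order-respecting draws
                draws.append((bx, by))
        return np.array(draws)

    prior = order_constrained_prior(10_000)
    print("P(beta_x >= beta_y) by construction:",
          np.mean(prior[:, 0] >= prior[:, 1]))
    print("implied prior means:", prior.mean(axis=0).round(2))
    ```
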
  21. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 1. Theory

    USGS Publications Warehouse

    Cooley, Richard L.

    1982-01-01

    Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.

  22. A Method for Constructing Informative Priors for Bayesian Modeling of Occupational Hygiene Data.

    PubMed

    Quick, Harrison; Huynh, Tran; Ramachandran, Gurumurthy

    2017-01-01

    In many occupational hygiene settings, the demand for more accurate, more precise results is at odds with limited resources. To combat this, practitioners have begun using Bayesian methods to incorporate prior information into their statistical models in order to obtain more refined inference from their data. This is not without risk, however, as incorporating prior information that disagrees with the information contained in data can lead to spurious conclusions, particularly if the prior is too informative. In this article, we propose a method for constructing informative prior distributions for normal and lognormal data that are intuitive to specify and robust to bias. To demonstrate the use of these priors, we walk practitioners through a step-by-step implementation of our priors using an illustrative example. We then conclude with recommendations for general use.

  23. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  24. Non-Gaussian information from weak lensing data via deep learning

    NASA Astrophysics Data System (ADS)

    Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán

    2018-05-01

    Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm,σ8} . Using the area of the confidence contour in the {Ωm,σ8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5 × tighter constraints than the power spectrum, and ≈4 × tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.

  25. Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors.

    PubMed

    Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J

    2015-02-20

    Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The new proposed priors are able to adaptively utilize the researcher's previous experience and current accrual data to produce the estimation of trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing smaller mean squared error, a high percentage of coverage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when the accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research.

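    The conjugate core of a Bayesian constant accrual model is small enough to sketch: exponential waiting times between enrollments with an inverse-gamma prior on the mean wait, where prior_weight plays the role of the researcher's confidence (the accelerated and hedging priors modify how that weight is set). Names and numbers are illustrative assumptions, not the authors' parameterization:

    ```python
    import numpy as np
    from scipy import stats

    def predict_completion(n_target, m_enrolled, t_elapsed,
                           prior_wait=1.0, prior_weight=10.0, ndraw=20_000):
        """Waiting times ~ Exp(mean theta); theta ~ InvGamma, so the posterior
        after m patients in time t is InvGamma(a + m, b + t). Returns simulated
        draws of the total trial duration."""
        a = prior_weight + m_enrolled
        b = prior_weight * prior_wait + t_elapsed
        theta = stats.invgamma(a, scale=b).rvs(ndraw, random_state=1)
        # Remaining time = sum of the outstanding exponential waits
        waits = stats.gamma(n_target - m_enrolled).rvs(ndraw, random_state=2)
        return t_elapsed + waits * theta

    # 30 of 100 patients enrolled in 40 weeks; prior expectation 1 patient/week
    draws = predict_completion(100, 30, 40.0)
    print("median completion:", np.median(draws).round(1),
          "weeks; 90% PI:", np.percentile(draws, [5, 95]).round(1))
    ```
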
  26. Dissecting effects of complex mixtures: who's afraid of informative priors?

    PubMed

    Thomas, Duncan C; Witte, John S; Greenland, Sander

    2007-03-01

    Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so – and by directly incorporating into our analyses information from other studies or allied fields – we can improve our ability to distinguish true causes of disease from noise and bias.

  27. Application of Bayesian informative priors to enhance the transferability of safety performance functions.

    PubMed

    Farid, Ahmed; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-09-01

    Safety performance functions (SPFs) are essential tools for highway agencies to predict crashes, identify hotspots and assess safety countermeasures. In the Highway Safety Manual (HSM), a variety of SPFs are provided for different types of roadway facilities, crash types and severity levels. Agencies lacking the necessary resources to develop their own localized SPFs may opt to apply the HSM's SPFs for their jurisdictions. Yet, municipalities that want to develop and maintain their regional SPFs might encounter the issue of small sample bias. Bayesian inference can be used to address this issue by combining the current data with prior information to achieve reliable results. It follows that the essence of Bayesian statistics is the application of informative priors, obtained from other SPFs or experts' experiences. In this study, we investigate the applicability of informative priors for Bayesian negative binomial SPFs for rural divided multilane highway segments in Florida and California. An SPF with non-informative priors is developed for each state and its parameters' distributions are assigned to the other state's SPF as informative priors. The performances of SPFs are evaluated by applying each state's SPFs to the other state. The analysis is conducted for both total (KABCO) and severe (KAB) crashes. As per the results, applying one state's SPF with informative priors, which are the other state's SPF independent variable estimates, to the latter state's conditions yields better goodness of fit (GOF) values than applying the former state's SPF with non-informative priors to the conditions of the latter state. This holds for both total and severe crash SPFs. Hence, for localities that prefer to adopt SPFs from elsewhere rather than develop their own localized SPFs, the application of informative priors is shown to facilitate the process.

  28. How Judgments Change Following Comparison of Current and Prior Information

    PubMed Central

    Albarracin, Dolores; Wallace, Harry M.; Hart, William; Brown, Rick D.

    2013-01-01

    Although much observed judgment change is superficial and occurs without considering prior information, other forms of change also occur. Comparison between prior and new information about an issue may trigger change by influencing either or both the perceived strength and direction of the new information. In four experiments, participants formed and reported initial judgments of a policy based on favorable written information about it. Later, these participants read a second passage containing strong favorable or unfavorable information on the policy. Compared to control conditions, subtle and direct prompts to compare the initial and new information led to more judgment change in the direction of a second passage perceived to be strong. Mediation analyses indicated that comparison yielded greater perceived strength of the second passage, which in turn correlated positively with judgment change. Moreover, self-reports of comparison mediated the judgment change resulting from comparison prompts. PMID:23599557

  29. Lateral orbitofrontal cortex anticipates choices and integrates prior with current information

    PubMed Central

    Nogueira, Ramon; Abolafia, Juan M.; Drugowitsch, Jan; Balaguer-Ballester, Emili; Sanchez-Vives, Maria V.; Moreno-Bote, Rubén

    2017-01-01

    Adaptive behavior requires integrating prior with current information to anticipate upcoming events. Brain structures related to this computation should bring relevant signals from the recent past into the present. Here we report that rats can integrate the most recent prior information with sensory information, thereby improving behavior on a perceptual decision-making task with outcome-dependent past trial history. We find that anticipatory signals in the orbitofrontal cortex about upcoming choice increase over time and are even present before stimulus onset. These neuronal signals also represent the stimulus and relevant second-order combinations of past state variables. The encoding of choice, stimulus and second-order past state variables resides, up to movement onset, in overlapping populations. The neuronal representation of choice before stimulus onset and its build-up once the stimulus is presented suggest that orbitofrontal cortex plays a role in transforming immediate prior and stimulus information into choices using a compact state-space representation. PMID:28337990

  30. Adaptive allocation for binary outcomes using decreasingly informative priors.

    PubMed

    Sabo, Roy T

    2014-01-01

    A method of outcome-adaptive allocation is presented using Bayes methods, where a natural lead-in is incorporated through the use of informative yet skeptical prior distributions for each treatment group. These prior distributions are modeled on unobserved data in such a way that their influence on the allocation scheme decreases as the trial progresses. Simulation studies show this method to behave comparably to the Bayesian adaptive allocation method described by Thall and Wathen (2007), who incorporate a natural lead-in through sample-size-based exponents.

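    One simple way to realize a "decreasingly informative prior" in a two-arm binary trial: give each arm a skeptical Beta prior whose effective sample size decays to zero as enrollment approaches its maximum. This is a sketch of the concept rather than the paper's exact construction; all parameter names and numbers are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)

    def prob_a_better(succ, fail, n_so_far, n_max, skeptic_ess=20.0):
        """P(p_A > p_B | data) under skeptical Beta priors centered at 0.5
        whose weight w decays linearly with enrollment; this probability
        would drive the outcome-adaptive allocation ratio."""
        w = skeptic_ess * max(0.0, 1.0 - n_so_far / n_max)
        draws = [stats.beta(0.5 * w + s + 1, 0.5 * w + f + 1)
                     .rvs(4000, random_state=rng)
                 for s, f in zip(succ, fail)]
        return float(np.mean(draws[0] > draws[1]))

    # Early on, the skeptical prior damps allocation driven by a lucky streak...
    print(prob_a_better(succ=[4, 1], fail=[1, 4], n_so_far=10, n_max=100))
    # ...late in the trial the prior has washed out and the data dominate
    print(prob_a_better(succ=[40, 10], fail=[10, 40], n_so_far=100, n_max=100))
    ```
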
  31. Adipose Gene Expression Prior to Weight Loss Can Differentiate and Weakly Predict Dietary Responders

    PubMed Central

    Mutch, David M.; Temanni, M. Ramzi; Henegar, Corneliu; Combes, Florence; Pelloux, Véronique; Holst, Claus; Sørensen, Thorkild I. A.; Astrup, Arne; Martinez, J. Alfredo; Saris, Wim H. M.; Viguerie, Nathalie; Langin, Dominique; Zucker, Jean-Daniel; Clément, Karine

    2007-01-01

    Background The ability to identify obese individuals who will successfully lose weight in response to dietary intervention will revolutionize disease management. Therefore, we asked whether it is possible to identify subjects who will lose weight during dietary intervention using only a single gene expression snapshot. Methodology/Principal Findings The present study involved 54 female subjects from the Nutrient-Gene Interactions in Human Obesity-Implications for Dietary Guidelines (NUGENOB) trial to determine whether subcutaneous adipose tissue gene expression could be used to predict weight loss prior to the 10-week consumption of a low-fat hypocaloric diet. Using several statistical tests revealed that the gene expression profiles of responders (8–12 kg weight loss) could always be differentiated from non-responders (<4 kg weight loss). We also assessed whether this differentiation was sufficient for prediction. Using a bottom-up (i.e. black-box) approach, standard class prediction algorithms were able to predict dietary responders with up to 61.1%±8.1% accuracy. Using a top-down approach (i.e. using differentially expressed genes to build a classifier) improved prediction accuracy to 80.9%±2.2%. Conclusion Adipose gene expression profiling prior to the consumption of a low-fat diet is able to differentiate responders from non-responders as well as serve as a weak predictor of subjects destined to lose weight. While the degree of prediction accuracy currently achieved with a gene expression snapshot is perhaps insufficient for clinical use, this work reveals that the comprehensive molecular signature of adipose tissue paves the way for the future of personalized nutrition. PMID:18094752

  32. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867

  33. Improving semantic scene understanding using prior information

    NASA Astrophysics Data System (ADS)

    Laddha, Ankit; Hebert, Martial

    2016-05-01

    Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.

  34. Influence of prior information on pain involves biased perceptual decision-making.

    PubMed

    Wiech, Katja; Vandekerckhove, Joachim; Zaman, Jonas; Tuerlinckx, Francis; Vlaeyen, Johan W S; Tracey, Irene

    2014-08-04

    Prior information about features of a stimulus is a strong modulator of perception. For instance, the prospect of more intense pain leads to an increased perception of pain, whereas the expectation of analgesia reduces pain, as shown in placebo analgesia and expectancy modulations during drug administration. This influence is commonly assumed to be rooted in altered sensory processing, and expectancy-related modulations in the spinal cord are often taken as evidence for this notion. Contemporary models of perception, however, suggest that prior information can also modulate perception by biasing perceptual decision-making – the inferential process underlying perception in which prior information is used to interpret sensory information. In this type of bias, the information is already present in the system before the stimulus is observed. Computational models can distinguish between changes in sensory processing and altered decision-making as they result in different response times for incorrect choices in a perceptual decision-making task (Figure S1A,B). Using a drift-diffusion model, we investigated the influence of both processes in two independent experiments. The results of both experiments strongly suggest that these changes in pain perception are predominantly based on altered perceptual decision-making.

  20. 40 CFR 60.2953 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and Reporting § 60.2953 What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s...

  1. 40 CFR 60.2195 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned...

  2. 40 CFR 60.2953 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and Reporting § 60.2953 What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s...

  3. 40 CFR 60.2195 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... What information must I submit prior to initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned...

  4. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the selection of the prior distribution. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is unavailable. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. The estimates of β and Σ are obtained as expected values under the marginal posterior distribution functions; the marginal posteriors for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate in closed form. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
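
    A minimal sketch of the Gibbs sampler the abstract describes, on our own simulated data: under Jeffreys' prior, the full conditional of B given Sigma is matrix normal around the least-squares fit, and that of Sigma given B is inverse Wishart with the residual cross-product as scale.

      import numpy as np
      from scipy.stats import invwishart

      rng = np.random.default_rng(0)
      n, k, p = 100, 3, 2                      # observations, predictors, responses
      X = rng.normal(size=(n, k))
      B_true = rng.normal(size=(k, p))
      Y = X @ B_true + rng.normal(size=(n, p))

      XtX_inv = np.linalg.inv(X.T @ X)
      B_hat = XtX_inv @ X.T @ Y                # least-squares estimate
      Sigma = np.eye(p)
      draws = []
      for it in range(2000):
          # B | Sigma, Y: matrix normal centered at the least-squares fit
          cov = np.kron(Sigma, XtX_inv)
          vecB = rng.multivariate_normal(B_hat.T.ravel(), cov)
          B = vecB.reshape(p, k).T
          # Sigma | B, Y: inverse Wishart with scale from current residuals
          R = Y - X @ B
          Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
          if it >= 500:                        # discard burn-in
              draws.append(B)
      print("posterior mean of B:\n", np.mean(draws, axis=0))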

  5. Informative priors on fetal fraction increase power of the noninvasive prenatal screen.

    PubMed

    Xu, Hanli; Wang, Shaowei; Ma, Lin-Lin; Huang, Shuai; Liang, Lin; Liu, Qian; Liu, Yang-Yang; Liu, Ke-Di; Tan, Ze-Min; Ban, Hao; Guan, Yongtao; Lu, Zuhong

    2017-11-09

    Purpose: Noninvasive prenatal screening (NIPS) sequences a mixture of the maternal and fetal cell-free DNA. Fetal trisomy can be detected by examining chromosomal dosages estimated from sequencing reads. The traditional method uses the Z-test, which compares a subject against a set of euploid controls, where the information of fetal fraction is not fully utilized. Here we present a Bayesian method that leverages informative priors on the fetal fraction. Method: Our Bayesian method combines the Z-test likelihood and informative priors of the fetal fraction, which are learned from the sex chromosomes, to compute Bayes factors. The Bayesian framework can account for nongenetic risk factors through the prior odds, and our method can report individual positive/negative predictive values. Results: Our Bayesian method has more power than the Z-test method. We analyzed 3,405 NIPS samples and spotted at least 9 (of 51) possible Z-test false positives. Conclusion: Bayesian NIPS is more powerful than the Z-test method, is able to account for nongenetic risk factors through prior odds, and can report individual positive/negative predictive values. Genetics in Medicine advance online publication, 9 November 2017; doi:10.1038/gim.2017.186.
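
    A hedged illustration of the idea (ours, not the paper's code): integrate the Z-statistic likelihood over an informative prior on the fetal fraction and compare against the euploid model. The Beta prior parameters and the linear fraction-to-shift scaling below are invented stand-ins.

      import numpy as np
      from scipy.stats import beta, norm

      def bayes_factor(z, shift_per_unit_fraction=40.0, a=5.0, b=45.0, ngrid=400):
          f = np.linspace(1e-3, 0.4, ngrid)    # fetal-fraction grid
          df = f[1] - f[0]
          prior = beta.pdf(f, a, b)            # informative prior, e.g. learned from sex chromosomes
          prior /= prior.sum() * df
          # Under trisomy, the expected Z-shift grows with fetal fraction
          # (hypothetical scaling); under euploidy, Z ~ N(0, 1).
          like_h1 = norm.pdf(z, loc=shift_per_unit_fraction * f / 2.0, scale=1.0)
          return (like_h1 * prior).sum() * df / norm.pdf(z, 0.0, 1.0)

      print("BF(trisomy vs euploid) at z = 4:", bayes_factor(4.0))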

  6. A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses.

    PubMed

    Conigliani, Caterina

    2010-07-20

    We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging (BMA) in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by BMA are undetermined. However, in accordance with the theory of partial Bayes factors and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via path sampling. Copyright (c) 2010 John Wiley & Sons, Ltd.

  7. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    PubMed

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

    One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
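
    To make the recommended interval concrete: a normal prior on log(RR) centered at 0 whose 95% interval is 0.50-2.0 has SD log(2)/1.96, roughly 0.35. A minimal sketch with invented trial counts and a normal approximation to the likelihood:

      import numpy as np

      prior_mean = 0.0
      prior_sd = np.log(2.0) / 1.96            # 95% prior interval for RR: (0.5, 2.0)

      # Hypothetical 2x2 trial: events/total in treatment and control arms.
      a, n1, c, n0 = 30, 200, 45, 200
      log_rr = np.log((a / n1) / (c / n0))
      se = np.sqrt(1/a - 1/n1 + 1/c - 1/n0)    # delta-method SE of log RR

      post_prec = 1/se**2 + 1/prior_sd**2
      post_mean = (log_rr/se**2 + prior_mean/prior_sd**2) / post_prec
      post_sd = post_prec ** -0.5
      print("posterior RR:", np.exp(post_mean),
            "95% CrI:", np.exp(post_mean - 1.96*post_sd), np.exp(post_mean + 1.96*post_sd))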

  8. 34 CFR 99.30 - Under what conditions is prior consent required to disclose information?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Under what conditions is prior consent required to disclose information? 99.30 Section 99.30 Education Office of the Secretary, Department of Education FAMILY... Information From Education Records? § 99.30 Under what conditions is prior consent required to disclose...

  9. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
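
    The ridge-style incorporation of prior information described above can be sketched in closed form (our simplification, with invented numbers): the ridge parameter k controls how strongly the estimates are pulled toward the prior values, and scanning k mirrors the MSE-versus-ridge-parameter plots the paper discusses.

      import numpy as np

      def ridge_with_prior(X, y, beta_prior, k):
          # Minimizes ||y - X b||^2 + k ||b - beta_prior||^2.
          kI = k * np.eye(X.shape[1])
          return np.linalg.solve(X.T @ X + kI, X.T @ y + k * beta_prior)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 4))
      y = X @ np.array([1.0, -0.5, 0.0, 2.0]) + rng.normal(size=30)
      prior = np.array([0.8, -0.4, 0.1, 1.5])  # prior estimates of unknown reliability
      for k in [0.0, 1.0, 10.0]:               # k = 0 recovers ordinary least squares
          print(k, ridge_with_prior(X, y, prior, k).round(2))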

  10. How the prior information shapes couplings in neural fields performing optimal multisensory integration

    NASA Astrophysics Data System (ADS)

    Wang, He; Zhang, Wen-Hao; Wong, K. Y. Michael; Wu, Si

    Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how the sensory reliability and the prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model, which can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, each for one modality. The crosstalk between the two modules can be carried out through feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to illustrate the relation between the prior information and features in coupling patterns quantitatively. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and utilize this prior information in optimal multisensory integration. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813) and National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).

  11. Incorporating prior information into differential network analysis using non-paranormal graphical models.

    PubMed

    Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong

    2017-08-15

    Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  12. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.

    PubMed

    Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana

    2013-10-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.

  13. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors

    PubMed Central

    PETERSON, CHRISTINE; VANNUCCI, MARINA; KARAKAS, CEMAL; CHOI, WILLIAM; MA, LIHUA; MALETIĆ-SAVATIĆ, MIRJANA

    2014-01-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation. PMID:24533172

  14. Assignment of a non-informative prior when using a calibration function

    NASA Astrophysics Data System (ADS)

    Lira, I.; Grientschnig, D.

    2012-01-01

    The evaluation of measurement uncertainty associated with the use of calibration functions was addressed in a talk at the 19th IMEKO World Congress 2009 in Lisbon (Proceedings, pp 2346-51). Therein, an example involving a cubic function was analysed by a Bayesian approach and by the Monte Carlo method described in Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement'. Results were found to be discrepant. In this paper we examine a simplified version of the example and show that the reported discrepancy is caused by the choice of the prior in the Bayesian analysis, which does not conform to formal rules for encoding the absence of prior knowledge. Two options for assigning a non-informative prior free from this shortcoming are considered; they are shown to be equivalent.

  15. Enhancing QKD security with weak measurements

    NASA Astrophysics Data System (ADS)

    Farinholt, Jacob M.; Troupe, James E.

    2016-10-01

    In the late 1980s, Aharonov and colleagues developed the notion of a weak measurement of a quantum observable that does not appreciably disturb the system.1, 2 The measurement results are conditioned on both the pre-selected and post-selected state of the quantum system. While any one measurement reveals very little information, by making the same measurement on a large ensemble of identically prepared pre- and post-selected (PPS) states and averaging the results, one may obtain what is known as the weak value of the observable with respect to that PPS ensemble. Recently, weak measurements have been proposed as a method of assessing the security of QKD in the well-known BB84 protocol.3 This weak value augmented QKD protocol (WV-QKD) works by additionally requiring the receiver, Bob, to make a weak measurement of a particular observable prior to his strong measurement. For the subset of measurement results in which Alice and Bob's measurement bases do not agree, the weak measurement results can be used to detect any attempt by an eavesdropper, Eve, to correlate her measurement results with Bob's. Furthermore, the well-known detector blinding attacks, which are known to perfectly correlate Eve's results with Bob's without being caught by conventional BB84 implementations, actually make the eavesdropper more visible in the new WV-QKD protocol. In this paper, we will introduce the WV-QKD protocol and discuss its generalization to the 6-state single qubit protocol. We will discuss the types of weak measurements that are optimal for this protocol, and compare the predicted performance of the 6- and 4-state WV-QKD protocols.

  16. Integrating informative priors from experimental research with Bayesian methods: an example from radiation epidemiology.

    PubMed

    Hamra, Ghassan; Richardson, David; Maclehose, Richard; Wing, Steve

    2013-01-01

    Informative priors can be a useful tool for epidemiologists to handle problems of sparse data in regression modeling. It is sometimes the case that an investigator is studying a population exposed to two agents, X and Y, where Y is the agent of primary interest. Previous research may suggest that the exposures have different effects on the health outcome of interest, one being more harmful than the other. Such information may be derived from epidemiologic analyses; however, in the case where such evidence is unavailable, knowledge can be drawn from toxicologic studies or other experimental research. Unfortunately, using toxicologic findings to develop informative priors in epidemiologic analyses requires strong assumptions, with no established method for its utilization. We present a method to help bridge the gap between animal and cellular studies and epidemiologic research by specification of an order-constrained prior. We illustrate this approach using an example from radiation epidemiology.
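
    A minimal sketch of an order-constrained prior (our invented numbers): experimental evidence that agent X is more harmful than agent Y is encoded by retaining only those joint prior draws satisfying beta_X >= beta_Y.

      import numpy as np

      rng = np.random.default_rng(0)
      bx = rng.normal(0.5, 0.5, size=100000)   # prior draws for the X coefficient
      by = rng.normal(0.2, 0.5, size=100000)   # prior draws for the Y coefficient
      keep = bx >= by                          # impose the order constraint by rejection
      print("acceptance rate:", keep.mean())
      print("constrained prior means:", bx[keep].mean(), by[keep].mean())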

  17. Recovering information of tunneling spectrum from weakly isolated horizon

    NASA Astrophysics Data System (ADS)

    Chen, Ge-Rui; Huang, Yong-Chang

    2015-02-01

    In this paper we investigate the properties of the tunneling spectrum from a weakly isolated horizon (WIH)—a locally defined black hole. We find that there exist correlations among Hawking radiations from a WIH, that information can be carried away by such correlations, and that the radiation is an entropy-conserving process. Through revisiting the calculation of the tunneling spectrum from a WIH, we find that Zhang et al.'s (Ann Phys 326:350, 2011) requirement that radiated particles have the same angular momentum per unit mass as the black hole is unnecessary: the energy and angular momenta of the emitted particles are quite arbitrary, restricted only by the cosmic censorship hypothesis for black holes. We thereby resolve the information loss paradox based on the method of Zhang et al. (Phys Lett B 675:98, 2009; Ann Phys 326:350, 2011; Int J Mod Phys D 22:1341014, 2013) in a general case.

  18. Selected aspects of prior and likelihood information for a Bayesian classifier in a road safety analysis.

    PubMed

    Nowakowska, Marzena

    2017-04-01

    The development of a Bayesian logistic regression model classifying road accident severity is discussed. Informative priors exploited in earlier work (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with an original Boot prior proposal, are investigated for the case where no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The resulting Bayesian logistic models are assessed on the basis of the deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. Verification of model accuracy is based on sensitivity, specificity, and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have better classification quality than those obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method-of-moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. 3D microwave tomography of the breast using prior anatomical information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golnabi, Amir H., E-mail: golnabia@montclair.edu; Meaney, Paul M.; Paulsen, Keith D.

    2016-04-15

    Purpose: The authors have developed a new 3D breast image reconstruction technique that utilizes the soft tissue spatial resolution of magnetic resonance imaging (MRI) and integrates the dielectric property differentiation from microwave imaging to produce a dual modality approach with the goal of augmenting the specificity of MR imaging, possibly without the need for nonspecific contrast agents. The integration is performed through the application of a soft prior regularization which imports segmented geometric meshes generated from MR exams and uses it to constrain the microwave tomography algorithm to recover nearly uniform property distributions within segmented regions with sharp delineation between these internal subzones. Methods: Previous investigations have demonstrated that this approach is effective in 2D simulation and phantom experiments and also in clinical exams. The current study extends the algorithm to 3D and provides a thorough analysis of the sensitivity and robustness to misalignment errors in size and location between the spatial prior information and the actual data. Results: Image results in 3D were not strongly dependent on reconstruction mesh density, and the changes of less than 30% in recovered property values arose from variations of more than 125% in target region size—an outcome which was more robust than in 2D. Similarly, changes of less than 13% occurred in the 3D image results from variations in target location of nearly 90% of the inclusion size. Permittivity and conductivity errors were about 5 times and 2 times smaller, respectively, with the 3D spatial prior algorithm in actual phantom experiments than those which occurred without priors. Conclusions: The presented study confirms that the incorporation of structural information in the form of a soft constraint can considerably improve the accuracy of the property estimates in predefined regions of interest. These findings are encouraging and establish a strong

  20. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) improved to 5.14 ± 0.99 μm when employing the new method with shape and context priors. PMID:23193309

  1. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
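
    The stabilization idea can be sketched directly from the removal-sampling likelihood (the counts and the penalty below are invented): successive passes remove binomial fractions of the remaining population, and a penalty on large N keeps the profile likelihood from drifting toward implausibly large population sizes.

      import numpy as np
      from scipy.stats import binom

      counts = [60, 30, 18]                    # removals in successive passes (invented)
      ps = np.linspace(0.01, 0.99, 99)         # grid over the unknown sampling rate

      def profile_log_lik(N):
          remaining, ll = N, np.zeros_like(ps)
          for c in counts:                     # each pass removes Binomial(remaining, p)
              ll = ll + binom.logpmf(c, remaining, ps)
              remaining -= c
          return ll.max()                      # profile out the sampling rate

      Ns = np.arange(sum(counts), 600)
      prof = np.array([profile_log_lik(N) for N in Ns])
      penalized = prof - 0.01 * Ns             # hypothetical penalty on large N
      print("profile-ML N:", Ns[prof.argmax()], " penalized N:", Ns[penalized.argmax()])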

  2. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-02-03

    High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling false discovery rate (FDR). Indeed, power improved when gene expression data for FDR-controlled informative weighting of association test p values of genes was used. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for joint testing rare and common variants, which was observed to depend on linkage disequilibrium structure.

  3. 78 FR 32359 - Information Required in Prior Notice of Imported Food

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-30

    ... or animal food based on food safety reasons, such as intentional or unintentional contamination of an... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 1 [Docket No. FDA-2011-N-0179] RIN 0910-AG65 Information Required in Prior Notice of Imported Food AGENCY: Food and Drug...

  4. A predictive model of avian natal dispersal distance provides prior information for investigating response to landscape change.

    PubMed

    Garrard, Georgia E; McCarthy, Michael A; Vesk, Peter A; Radford, James Q; Bennett, Andrew F

    2012-01-01

    1. Informative Bayesian priors can improve the precision of estimates in ecological studies or estimate parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature. 2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable. 3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well when using the fitted data and an independent test data set, explaining up to 53% of the variation. 4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation. 5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon. 6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain

  5. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically make use only of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.

  6. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields; in particular, convolutional neural networks (CNNs) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas-based segmentation models have been widely used in medical image segmentation because an atlas carries powerful information about the target structure, consisting of MR images and the corresponding manual segmentations of that structure. We combined prior information made from the multi-atlas images, such as the location and intensity distribution of the target structure (i.e., the CC), into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  7. Stress affects the neural ensemble for integrating new information and prior knowledge.

    PubMed

    Vogel, Susanne; Kluen, Lisa Marieke; Fernández, Guillén; Schwabe, Lars

    2018-06-01

    Prior knowledge, represented as a schema, facilitates memory encoding. This schema-related learning is assumed to rely on the medial prefrontal cortex (mPFC) that rapidly integrates new information into the schema, whereas schema-incongruent or novel information is encoded by the hippocampus. Stress is a powerful modulator of prefrontal and hippocampal functioning and first studies suggest a stress-induced deficit of schema-related learning. However, the underlying neural mechanism is currently unknown. To investigate the neural basis of a stress-induced schema-related learning impairment, participants first acquired a schema. One day later, they underwent a stress induction or a control procedure before learning schema-related and novel information in the MRI scanner. In line with previous studies, learning schema-related compared to novel information activated the mPFC, angular gyrus, and precuneus. Stress, however, affected the neural ensemble activated during learning. Whereas the control group distinguished between sets of brain regions for related and novel information, stressed individuals engaged the hippocampus even when a relevant schema was present. Additionally, stressed participants displayed aberrant functional connectivity between brain regions involved in schema processing when encoding novel information. The failure to segregate functional connectivity patterns depending on the presence of prior knowledge was linked to impaired performance after stress. Our results show that stress affects the neural ensemble underlying the efficient use of schemas during learning. These findings may have relevant implications for clinical and educational settings. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Use of Elaborative Interrogation to Help Students Acquire Information Consistent with Prior Knowledge and Information Inconsistent with Prior Knowledge.

    ERIC Educational Resources Information Center

    Woloshyn, Vera E.; And Others

    1994-01-01

    Thirty-two factual statements, half consistent and half not consistent with subjects' prior knowledge, were processed by 140 sixth and seventh graders. Half were directed to use elaborative interrogation (using prior knowledge) to answer why each statement was true. Across all memory measures, elaborative interrogation subjects performed better…

  9. Ten-Month-Old Infants Use Prior Information to Identify an Actor's Goal

    ERIC Educational Resources Information Center

    Sommerville, Jessica A.; Crane, Catharyn C.

    2009-01-01

    For adults, prior information about an individual's likely goals, preferences or dispositions plays a powerful role in interpreting ambiguous behavior and predicting and interpreting behavior in novel contexts. Across two studies, we investigated whether 10-month-old infants' ability to identify the goal of an ambiguous action sequence was…

  10. Adjusting for founder relatedness in a linkage analysis using prior information.

    PubMed

    Sheehan, N A; Egeland, T

    2008-01-01

    In genetic linkage studies, while the pedigrees are generally known, background relatedness between the founding individuals, assumed by definition to be unrelated, can seriously affect the results of the analysis. Likelihood approaches to relationship estimation from genetic marker data can all be expressed in terms of finding the most likely pedigree connecting the individuals of interest. When the true relationship is the main focus, the set of all possible alternative pedigrees can be too large to consider. However, prior information is often available which, when incorporated in a formal and structured way, can restrict this set to a manageable size thus enabling the calculation of a posterior distribution from which inferences can be drawn. Here, the unknown relationships are more of a nuisance factor than of interest in their own right, so the focus is on adjusting the results of the analysis rather than on direct estimation. In this paper, we show how prior information on founder relationships can be exploited in some applications to generate a set of candidate extended pedigrees. We then weight the relevant pedigree-specific likelihoods by their posterior probabilities to adjust the lod score statistics. (c) 2007 S. Karger AG, Basel

  11. Iterative CT shading correction with no prior information

    NASA Astrophysics Data System (ADS)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources are generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to be less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast object is faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical
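
    The correction loop can be caricatured with a parallel-beam Radon transform standing in for the real scanner geometry; this is a rough sketch in which the synthetic shading, segmentation thresholds, and filter width are all invented, with skimage and scipy assumed available.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from skimage.transform import radon, iradon
      from skimage.data import shepp_logan_phantom

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      true = shepp_logan_phantom()                        # 400 x 400, values in [0, 1]
      h, w = true.shape
      yy, xx = np.ogrid[:h, :w]
      circle = ((xx - w / 2)**2 + (yy - h / 2)**2) <= (min(h, w) / 2 - 2)**2
      shading = 0.1 * np.linspace(-1.0, 1.0, w)[None, :]  # synthetic low-frequency artifact
      img = (true + shading) * circle

      for _ in range(3):                                  # iterate until the residual stabilizes
          template = np.where(img > 0.5, 1.0,
                              np.where(img > 0.15, 0.2, 0.0))   # crude tissue-class template
          residual = (img - template) * circle
          sino = radon(residual, theta=theta)             # errors become smooth line integrals
          sino = gaussian_filter1d(sino, sigma=20, axis=0)      # keep only low frequencies
          correction = iradon(sino, theta=theta, filter_name='ramp')
          img = img - correction * circle                 # remove the estimated shading

      print("mean |error| after correction:", np.abs(img - true * circle).mean())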

  12. Hierarchical Commensurate and Power Prior Models for Adaptive Incorporation of Historical Information in Clinical Trials

    PubMed Central

    Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.

    2011-01-01

    Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow for the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
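
    The power prior at the core of this model family is easy to sketch for a Gaussian mean (toy numbers, fixed a0): the historical likelihood enters raised to the power a0, so a0 = 0 discards and a0 = 1 fully pools the historical trial; the commensurate-prior elaboration instead lets the agreement between the two data sources govern the borrowing.

      import numpy as np

      def power_prior_posterior(y_cur, y_hist, sigma, a0):
          # Flat initial prior; raising the historical likelihood to a0
          # simply scales that trial's contribution to the precision.
          prec = len(y_cur)/sigma**2 + a0*len(y_hist)/sigma**2
          mean = (y_cur.sum()/sigma**2 + a0*y_hist.sum()/sigma**2) / prec
          return mean, prec ** -0.5

      rng = np.random.default_rng(0)
      y_hist = rng.normal(1.0, 1.0, 50)        # historical data
      y_cur = rng.normal(0.3, 1.0, 20)         # current trial, in mild conflict
      for a0 in [0.0, 0.3, 1.0]:
          print(a0, power_prior_posterior(y_cur, y_hist, 1.0, a0))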

  13. 21 CFR 1.282 - What must you do if information changes after you have received confirmation of a prior notice...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...

  14. 21 CFR 1.282 - What must you do if information changes after you have received confirmation of a prior notice...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...

  15. 21 CFR 1.282 - What must you do if information changes after you have received confirmation of a prior notice...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...

  16. 21 CFR 1.282 - What must you do if information changes after you have received confirmation of a prior notice...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...

  17. 21 CFR 1.282 - What must you do if information changes after you have received confirmation of a prior notice...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... have received confirmation of a prior notice from FDA? 1.282 Section 1.282 Food and Drugs FOOD AND DRUG... changes after you have received confirmation of a prior notice from FDA? (a)(1) If any of the information... information), changes after you receive notice that FDA has confirmed your prior notice submission for review...

  18. Integrating prior information into microwave tomography Part 1: Impact of detail on image quality.

    PubMed

    Kurrant, Douglas; Baran, Anastasia; LoVetri, Joe; Fear, Elise

    2017-12-01

    The authors investigate the impact that incremental increases in the level of detail of patient-specific prior information have on image quality and the convergence behavior of an inversion algorithm in the context of near-field microwave breast imaging. A methodology is presented that uses image quality measures to characterize the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The approach permits key aspects that impact the quality of reconstruction of these structures to be identified and quantified. This provides insight into opportunities to improve image reconstruction performance. Patient-specific information is acquired using radar-based methods that form a regional map of the breast. This map is then incorporated into a microwave tomography algorithm. Previous investigations have demonstrated the effectiveness of this approach to improve image quality when applied to data generated with two-dimensional (2D) numerical models. The present study extends this work by generating prior information that is customized to vary the degree of structural detail to facilitate the investigation of the role of prior information in image formation. Numerical 2D breast models constructed from magnetic resonance (MR) scans, and reconstructions formed with a three-dimensional (3D) numerical breast model are used to assess if trends observed for the 2D results can be extended to 3D scenarios. For the blind reconstruction scenario (i.e., no prior information), the breast surface is not accurately identified and internal structures are not clearly resolved. A substantial improvement in image quality is achieved by incorporating the skin surface map and constraining the imaging domain to the breast. Internal features within the breast appear in the reconstructed image. However, it is challenging to discriminate between adipose and glandular regions and there are inaccuracies in both the structural properties of

  19. Small-Sample Equating with Prior Information. Research Report. ETS RR-09-25

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Lewis, Charles

    2009-01-01

    This report proposes an empirical Bayes approach to the problem of equating scores on test forms taken by very small numbers of test takers. The equated score is estimated separately at each score point, making it unnecessary to model either the score distribution or the equating transformation. Prior information comes from equatings of other…

  20. 21 CFR 1.281 - What information must be in a prior notice?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... by truck, bus, or rail, the trip number; (v) For food arriving as containerized cargo by water, air... arrived by truck, bus, or rail, the trip number; (v) For food that arrived as containerized cargo by water... 21 Food and Drugs 1 2010-04-01 2010-04-01 false What information must be in a prior notice? 1.281...

  1. Analytical Calculation of Mutual Information between Weakly Coupled Poisson-Spiking Neurons in Models of Dynamically Gated Communication.

    PubMed

    Cannon, Jonathan

    2017-01-01

    Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate, and, in particular, they could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.

  2. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information; for example, consider information from two (or more) clinicians, the first pessimistic about the drug and the second optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
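
    A small sketch of the conjugate mixture update (clinician weights, Beta parameters, and trial counts invented): the posterior is again a mixture of Betas, with component weights updated by their Beta-Binomial marginal likelihoods, from which the posterior probability of exceeding the target response rate follows.

      import numpy as np
      from scipy.stats import beta, betabinom

      x, n, target = 14, 40, 0.25              # responses, patients, target rate
      components = [(0.5, 2.0, 8.0),           # (weight, a, b): pessimistic clinician
                    (0.5, 8.0, 2.0)]           # optimistic clinician

      post = []
      for w, a, b in components:
          marg = betabinom.pmf(x, n, a, b)     # evidence for this component
          post.append((w * marg, a + x, b + n - x))
      total = sum(w for w, _, _ in post)
      prob = sum(w/total * beta.sf(target, a, b) for w, a, b in post)
      print("P(response rate > target | data) =", round(prob, 3))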

  3. State-dependent rotations of spins by weak measurements

    NASA Astrophysics Data System (ADS)

    Miller, D. J.

    2011-03-01

    It is shown that a weak measurement of a quantum system produces a new state of the quantum system which depends on the prior state, as well as the (uncontrollable) measured position of the pointer variable of the weak-measurement apparatus. The result imposes a constraint on hidden-variable theories which assign a different state to a quantum system than standard quantum mechanics. The constraint means that a crypto-nonlocal hidden-variable theory can be ruled out in a more direct way than previously done.

  4. Guided transect sampling - a new design combining prior information and field surveying

    Treesearch

    Anna Ringvall; Goran Stahl; Tomas Lamas

    2000-01-01

    Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...

  5. 40 CFR 60.2953 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...

  6. 40 CFR 60.2195 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...

  7. 40 CFR 60.2953 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...

  8. 40 CFR 60.2953 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...

  9. 40 CFR 60.2195 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...

  10. 40 CFR 60.2195 - What information must I submit prior to initial startup?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...

  11. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
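
    Glimpse's full pipeline (multi-scale wavelet dictionaries, shear and flexion operators, fast Fourier estimators) is beyond a short example, but the core idea of sparsity-regularized inversion can be sketched with a generic iterative soft-thresholding (ISTA) loop; the random measurement matrix below is a hypothetical stand-in for the lensing operator:

        # A minimal ISTA sketch of sparsity-regularized linear inversion:
        # recover a sparse x from noisy measurements y = A x + n. This is
        # the generic technique, not Glimpse's actual operators or prior.
        import numpy as np

        rng = np.random.default_rng(0)
        n_meas, n_pix, k_sparse = 80, 200, 8

        A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)
        x_true = np.zeros(n_pix)
        x_true[rng.choice(n_pix, k_sparse, replace=False)] = rng.normal(size=k_sparse)
        y = A @ x_true + 0.01 * rng.normal(size=n_meas)

        lam = 0.02                              # sparsity penalty weight
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
        x = np.zeros(n_pix)
        for _ in range(500):
            grad = A.T @ (A @ x - y)            # gradient of data-fidelity term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

        print("relative recovery error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))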

  12. Accommodating Uncertainty in Prior Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Vander Wiel, Scott Alan

    2017-01-19

    A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely de ned prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.

  13. Systematic effects on dark energy from 3D weak shear

    NASA Astrophysics Data System (ADS)

    Kitching, T. D.; Taylor, A. N.; Heavens, A. F.

    2008-09-01

    We present an investigation into the potential effect of systematics inherent in multiband wide-field surveys on the dark energy equation-of-state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to include jointly photometric redshift systematics, shear distortion systematics and intrinsic alignments. Using analytic parametrizations of these three primary systematic effects allows an isolation of systematic parameters of particular importance. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. By marginalizing over extra systematic parameters, such biases are negated at the expense of an increase in the cosmological parameter errors. We show the effect on the dark energy Figure of Merit of marginalizing over each systematic parameter individually. We also show the overall reduction in the Figure of Merit due to all three types of systematic effects. Based on some assumption of the likely level of systematic errors, we find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods, if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst-case scenario, within the assumptions of the parametrizations used. By including prior information on systematic parameters, the Figure of Merit can be recovered to a large extent, and combined constraints from 3D cosmic shear and shear ratio are robust to systematics. We conclude that, as a rule
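
    The fix-versus-marginalize trade-off the abstract quantifies can be illustrated with a toy two-parameter Fisher matrix (illustrative numbers, not the paper's): fixing the systematic parameter gives a smaller error bar but incurs a bias if its assumed value is off, while marginalizing removes the bias at the cost of precision.

        # Standard Fisher-matrix algebra for one interesting parameter (w)
        # and one systematic (psi). Entries are hypothetical.
        import numpy as np

        F = np.array([[400.0, 150.0],
                      [150.0,  90.0]])

        sigma_w_fixed = 1.0 / np.sqrt(F[0, 0])          # psi fixed (assumed exact)
        sigma_w_marg = np.sqrt(np.linalg.inv(F)[0, 0])  # psi marginalized over

        delta_psi = 0.05                                # assumed offset in fixed psi
        bias_w = -F[0, 1] * delta_psi / F[0, 0]         # induced bias on w

        print(f"sigma(w), psi fixed:        {sigma_w_fixed:.4f}")
        print(f"sigma(w), psi marginalized: {sigma_w_marg:.4f}")
        print(f"bias on w if psi off by {delta_psi}: {bias_w:.4f}")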

  14. Adaptive power priors with empirical Bayes for clinical trials.

    PubMed

    Gravestock, Isaac; Held, Leonhard

    2017-09-01

    Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much discussion as a way to increase the feasibility of trials in situations where patients are difficult to recruit. The best method to include this data is not yet clear, especially in the case when few historical studies are available. This paper looks at the power prior technique afresh in a binomial setting and examines some previously unexamined properties, such as Box P values, bias, and coverage. Additionally, it proposes an empirical Bayes-type approach to estimating the prior weight parameter by marginal likelihood. This estimate has advantages over previously criticised methods in that it varies commensurably with differences in the historical and current data and can choose weights near 1 when the data are similar enough. Fully Bayesian approaches are also considered. An analysis of the operating characteristics shows that the adaptive methods work well and that the various approaches have different strengths and weaknesses. Copyright © 2017 John Wiley & Sons, Ltd.
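
    A minimal sketch of the empirical Bayes idea in the binomial setting the paper considers, assuming a conditional power prior with a Beta(1, 1) initial prior and hypothetical counts: the weight is chosen to maximize the marginal likelihood of the current data.

        # Empirical Bayes choice of the power prior weight delta in a
        # beta-binomial model. Counts are hypothetical; this is a sketch of
        # the general technique, not the paper's exact implementation.
        import numpy as np
        from scipy.special import betaln

        y0, n0 = 30, 100   # historical data: successes / trials
        y, n = 24, 60      # current data
        a, b = 1.0, 1.0    # initial Beta prior

        def log_marginal(delta):
            """Log marginal likelihood of current data given weight delta."""
            a0 = a + delta * y0                  # power prior = Beta(a0, b0)
            b0 = b + delta * (n0 - y0)
            return betaln(a0 + y, b0 + n - y) - betaln(a0, b0)

        grid = np.linspace(0.0, 1.0, 1001)
        delta_hat = grid[np.argmax([log_marginal(d) for d in grid])]

        a_post = a + delta_hat * y0 + y
        b_post = b + delta_hat * (n0 - y0) + n - y
        print(f"EB weight delta = {delta_hat:.3f}, posterior mean = "
              f"{a_post / (a_post + b_post):.3f}")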

  15. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  16. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    ERIC Educational Resources Information Center

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…
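
    The boundary problem, and the flavor of weakly informative fix proposed in the full paper (a Gamma(2, rate) prior on the group-level standard deviation, whose log density contributes a log penalty as the rate goes to zero), can be sketched on a small meta-analysis-style example with hypothetical data:

        # With few groups, the profile likelihood of the group-level SD tau
        # often peaks at tau = 0; adding log(tau) (the Gamma(2, rate -> 0)
        # prior kernel) moves the mode off the boundary. Data are invented.
        import numpy as np

        y = np.array([1.2, 0.8, 1.1, 0.9, 1.0])   # group estimates
        s = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # their standard errors

        def profile_loglik(tau):
            """Log-likelihood of tau with the grand mean profiled out."""
            v = s**2 + tau**2
            mu_hat = np.sum(y / v) / np.sum(1.0 / v)
            return -0.5 * np.sum(np.log(v) + (y - mu_hat) ** 2 / v)

        taus = np.linspace(1e-6, 2.0, 4000)
        ll = np.array([profile_loglik(t) for t in taus])
        penalized = ll + np.log(taus)              # add the log-tau prior kernel

        print("ML mode of tau:       ", taus[np.argmax(ll)])         # at boundary
        print("penalized mode of tau:", taus[np.argmax(penalized)])  # strictly > 0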

  17. Relation Extraction with Weak Supervision and Distributional Semantics

    DTIC Science & Technology

    2013-05-01

    [No abstract recovered; this record contains only report front-matter residue. The documentation page dates the report to 2013, and the table of contents indicates an introduction followed by a review of prior work on supervised relation extraction and distant supervision for relation extraction.]

  18. Feasibility of Providing Web-Based Information to Breast Cancer Patients Prior to a Surgical Consult.

    PubMed

    Bruce, Jordan G; Tucholka, Jennifer L; Steffens, Nicole M; Mahoney, Jane E; Neuman, Heather B

    2017-03-30

    Patients facing decisions for breast cancer surgery commonly search the internet. Directing patients to high-quality websites prior to the surgeon consultation may be one way of supporting patients' informational needs. The objective was to test an approach for delivering web-based information to breast cancer patients. The implementation strategy was developed using the Replicating Effective Programs framework. Pilot testing measured the proportion that accepted the web-based information. A pre-consultation survey assessed whether the information was reviewed and the acceptability to stakeholders. Reasons for declining guided refinement to the implementation package. Eighty-two percent (309/377) accepted the web-based information. Of the 309 that accepted, 244 completed the pre-consultation survey. Participants were a median 59 years, white (98%), and highly educated (>50% with a college degree). Most patients who completed the questionnaire reported reviewing the website (85%), and nearly all found it helpful. Surgeons thought implementation increased visit efficiency (5/6) and would result in patients making more informed decisions (6/6). The most common reasons patients declined information were limited internet comfort or access (n = 36), emotional distress (n = 14), and preference to receive information directly from the surgeon (n = 7). Routine delivery of web-based information to breast cancer patients prior to the surgeon consultation is feasible. High stakeholder acceptability combined with the low implementation burden means that these findings have immediate relevance for improving care quality.

  19. 78 FR 65670 - Agency Information Collection Activities; Proposed Collection; Comment Request; Prior Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ... Food Under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 AGENCY... appropriate, and other forms of information technology. Prior Notice of Imported Food Under the Public Health... 0910-0520)--Revision The Public Health Security and Bioterrorism Preparedness and Response Act of 2002...

  20. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is

  1. Informative priors based on transcription factor structural class improve de novo motif discovery.

    PubMed

    Narlikar, Leelavati; Gordân, Raluca; Ohler, Uwe; Hartemink, Alexander J

    2006-07-15

    An important problem in molecular biology is to identify the locations at which a transcription factor (TF) binds to DNA, given a set of DNA sequences believed to be bound by that TF. In previous work, we showed that information in the DNA sequence of a binding site is sufficient to predict the structural class of the TF that binds it. In particular, this suggests that we can predict which locations in any DNA sequence are more likely to be bound by certain classes of TFs than others. Here, we argue that traditional methods for de novo motif finding can be significantly improved by adopting an informative prior probability that a TF binding site occurs at each sequence location. To demonstrate the utility of such an approach, we present priority, a powerful new de novo motif finding algorithm. Using data from TRANSFAC, we train three classifiers to recognize binding sites of basic leucine zipper, forkhead, and basic helix loop helix TFs. These classifiers are used to equip priority with three class-specific priors, in addition to a default prior to handle TFs of other classes. We apply priority and a number of popular motif finding programs to sets of yeast intergenic regions that are reported by ChIP-chip to be bound by particular TFs. priority identifies motifs the other methods fail to identify, and correctly predicts the structural class of the TF recognizing the identified binding sites. Supplementary material and code can be found at http://www.cs.duke.edu/~amink/.

  2. Meteorological variables to aid forecasting deep slab avalanches on persistent weak layers

    USGS Publications Warehouse

    Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.

    2015-01-01

    Deep slab avalanches are particularly challenging to forecast. These avalanches are difficult to trigger, yet when they release they tend to propagate far and can result in large and destructive avalanches. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl ski area in southwest Montana to test the usefulness of meteorological variables for predicting seasons and days with deep slab avalanches. We defined deep slab avalanches as those that failed on persistent weak layers deeper than 0.9 m, and that occurred after February 1st. Previous studies often used meteorological variables from days prior to avalanches, but we also considered meteorological variables over the early months of the season. We used classification trees and random forests for our analyses. Our results showed seasons with either dry or wet deep slabs on persistent weak layers typically had less precipitation from November through January than seasons without deep slabs on persistent weak layers. Days with deep slab avalanches on persistent weak layers often had warmer minimum 24-hour air temperatures, and more precipitation over the prior seven days, than days without deep slabs on persistent weak layers. Days with deep wet slab avalanches on persistent weak layers were typically preceded by three days of above freezing air temperatures. Seasonal and daily meteorological variables were found useful to aid forecasting dry and wet deep slab avalanches on persistent weak layers, and should be used in combination with continuous observation of the snowpack and avalanche activity.
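
    A minimal sketch of the modeling step described above, fitting a classification tree to daily meteorological summaries; the feature names echo variables in the abstract, but the data and the generating rule are synthetic, purely for illustration:

        # Classification tree on synthetic avalanche-day data (illustrative
        # only; not the study's data or its exact variable definitions).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(1)
        n_days = 400
        tmin_24h = rng.normal(-8, 5, n_days)      # min air temperature, deg C
        precip_7d = rng.gamma(2.0, 10.0, n_days)  # 7-day precipitation, mm

        # Synthetic rule loosely echoing the findings: warmer days with heavy
        # recent loading are more likely to produce deep slab avalanches.
        p = 1 / (1 + np.exp(-(0.2 * (tmin_24h + 5) + 0.05 * (precip_7d - 20))))
        deep_slab_day = rng.random(n_days) < p

        X = np.column_stack([tmin_24h, precip_7d])
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, deep_slab_day)
        print(export_text(tree, feature_names=["tmin_24h", "precip_7d"]))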

  3. Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas

    NASA Astrophysics Data System (ADS)

    Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam

    2017-12-01

    Recent developments in information theory provide researchers with an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with the external environment. In this work, we systematically study information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by the Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified with high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions, in both conjugate spaces, and it is quantified by the variation of information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces, and the Fisher-Shannon complexity in individual spaces) give a more comprehensive description of the atomic structure information than single measures. The nodes of wave functions play a significant role in the changes of composite information quantities caused by plasmas. With continuously increasing screening strength, all composite quantities in circular states increase monotonically, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the greatest reduction in the uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our work presents a comprehensive survey of information-theoretic measures for simple atoms embedded in Debye model plasmas.
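
    As a small sanity check of the quantities involved, the position-space Shannon entropy of the unscreened hydrogen 1s state can be computed numerically and compared with its closed form, S = 3 + ln(pi) in atomic units; the screened Debye-Hückel cases in the paper require numerical wavefunctions, which this sketch does not attempt:

        # Position-space Shannon entropy of hydrogen 1s (atomic units), where
        # rho(r) = exp(-2r)/pi and S = -int rho ln(rho) d^3r = 3 + ln(pi).
        import numpy as np
        from scipy.integrate import quad

        def integrand(r):
            rho = np.exp(-2.0 * r) / np.pi       # 1s probability density
            return -rho * np.log(rho) * 4.0 * np.pi * r**2

        S, _ = quad(integrand, 0.0, 50.0)
        print(f"numerical S = {S:.4f}, analytic 3 + ln(pi) = "
              f"{3 + np.log(np.pi):.4f}")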

  4. The Counter-Intuitive Non-Informative Prior for the Bernoulli Family

    ERIC Educational Resources Information Center

    Zhu, Mu; Lu, Arthur Y.

    2004-01-01

    In Bayesian statistics, the choice of the prior distribution is often controversial. Different rules for selecting priors have been suggested in the literature, which, sometimes, produce priors that are difficult for the students to understand intuitively. In this article, we use a simple heuristic to illustrate to the students the rather…
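
    One standard example of such a counter-intuitive choice (a textbook derivation, not necessarily the one used in the article) is the Jeffreys prior for the Bernoulli parameter. The Fisher information of a single Bernoulli observation is

        I(\theta) = \mathbb{E}\left[\left(\frac{\partial}{\partial\theta} \log \theta^{x}(1-\theta)^{1-x}\right)^{2}\right] = \frac{1}{\theta(1-\theta)},

    so the Jeffreys prior is

        \pi(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2},

    i.e. the Beta(1/2, 1/2) density: U-shaped, with most of its mass near 0 and 1, rather than the flat prior students typically expect of a "non-informative" choice.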

  5. Meteorological variables associated with deep slab avalanches on persistent weak layers

    USGS Publications Warehouse

    Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.

    2014-01-01

    Deep slab avalanches are a particularly challenging avalanche forecasting problem. These avalanches are typically difficult to trigger, yet when they are triggered they tend to propagate far and result in large and destructive avalanches. For this work we define deep slab avalanches as those that fail on persistent weak layers deeper than 0.9m (3 feet), and that occur after February 1st. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl Ski Area to test the usefulness of meteorological variables for predicting deep slab avalanches. As in previous studies, we used data from the days preceding deep slab cycles, but we also considered meteorological metrics over the early months of the season. We utilized classification trees for our analyses. Our results showed warmer temperatures in the prior twenty-four hours and more loading over the seven days before days with deep slab avalanches on persistent weak layers. In line with previous research, extended periods of above freezing temperatures led to days with deep wet slab avalanches on persistent weak layers. Seasons with either dry or wet avalanches on deep persistent weak layers typically had drier early months, and often had some significant snow depth prior to those dry months. This paper provides insights for ski patrollers, guides, and avalanche forecasters who struggle to forecast deep slab avalanches on persistent weak layers late in the season.

  6. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  7. Spontaneous trait inference and spontaneous trait transference are both unaffected by prior evaluations of informants.

    PubMed

    Zengel, Bettina; Ambler, James K; McCarthy, Randy J; Skowronski, John J

    2017-01-01

    This article reports results from a study in which participants encountered either (a) previously known informants who were positive (e.g. Abraham Lincoln), neutral (e.g., Jay Leno), or negative (e.g., Adolf Hitler), or (b) previously unknown informants. The informants ostensibly described either a trait-implicative positive behavior, a trait-implicative negative behavior, or a neutral behavior. These descriptions were framed as either the behavior of the informant or the behavior of another person. Results yielded evidence of informant-trait linkages for both self-informants and for informants who described another person. These effects were not moderated by informant type, behavior valence, or the congruency or incongruency between the prior knowledge of the informant and the behavior valence. Results are discussed in terms of theories of Spontaneous Trait Inference and Spontaneous Trait Transference.

  8. The impact of prior knowledge from participant instructions in a mock crime P300 Concealed Information Test.

    PubMed

    Winograd, Michael R; Rosenfeld, J Peter

    2014-12-01

    In P300-Concealed Information Tests used with mock crime scenarios, the amount of detail revealed to a participant prior to the commission of the mock crime can have a serious impact on a study's validity. We predicted that exposure to crime details through instructions would bias detection rates toward enhanced sensitivity. In a 2 × 2 factorial design, participants were either informed (through mock crime instructions) or naïve as to the identity of a to-be-stolen item, and then either committed (guilty) or did not commit (innocent) the crime. Results showed that prior knowledge of the stolen item was sufficient to cause 69% of innocent-informed participants to be incorrectly classified as guilty. Further, we found a trend toward enhanced detection rate for guilty-informed participants over guilty-naïve participants. Results suggest that revealing details to participants through instructions biases detection rates in the P300-CIT toward enhanced sensitivity. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. The power prior: theory and applications.

    PubMed

    Ibrahim, Joseph G; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang

    2015-12-10

    The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A-to-Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. Copyright © 2015 John Wiley & Sons, Ltd.
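
    The central construction reviewed in the article has a compact standard form. Given historical data D_0, an initial prior \pi_0, and a discounting weight a_0 \in [0, 1], the power prior is

        \pi(\theta \mid D_0, a_0) \propto L(\theta \mid D_0)^{a_0} \, \pi_0(\theta),

    so that a_0 = 0 discards the historical data entirely, a_0 = 1 pools it fully, and intermediate values down-weight the historical likelihood.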

  10. The Power Prior: Theory and Applications

    PubMed Central

    Ibrahim, Joseph G.; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang

    2015-01-01

    The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A-to-Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. PMID:26346180

  11. Bayesian inference with historical data-based informative priors improves detection of differentially expressed genes

    PubMed Central

    Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S.

    2016-01-01

    Motivation: Modern high-throughput biotechnologies such as microarrays are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only a limited number of samples are assayed, leading to the classical ‘large p, small n’ problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Results: Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework, in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods, including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the ‘large p, small n’ problem. Availability and implementation: Our method is implemented in R package IPBT, which is freely available from https://github.com/benliemory/IPBT. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26519502

  12. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors either from external data or from the noisy image itself. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe with simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.

  13. Weak-value amplification as an optimal metrological protocol

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.

    2015-06-01

    The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned to successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.

  14. Information gains from cosmological probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandis, S.; Seehars, S.; Refregier, A.

    In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated by information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions which include curvature and the Dark Energy equation-of-state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and H_0 measures (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8σ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
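
    For a one-dimensional Gaussian prior updated to a Gaussian posterior, the relative entropy in bits has a closed form, which makes it easy to see how both precision gains and shifts in central values ('surprise') contribute; the numbers below are hypothetical, not the paper's constraints:

        # KL divergence between Gaussian posterior and Gaussian prior, in bits.
        import numpy as np

        def kl_gauss_bits(mu_post, sig_post, mu_prior, sig_prior):
            """D_KL( N(mu_post, sig_post^2) || N(mu_prior, sig_prior^2) ) in bits."""
            nats = (np.log(sig_prior / sig_post)
                    + (sig_post**2 + (mu_post - mu_prior) ** 2)
                    / (2 * sig_prior**2)
                    - 0.5)
            return nats / np.log(2)

        # Precision gain alone (same central value, error bar halved):
        print(kl_gauss_bits(0.30, 0.01, 0.30, 0.02))
        # Same precision gain plus a two-prior-sigma shift ("surprise"):
        print(kl_gauss_bits(0.34, 0.01, 0.30, 0.02))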

  15. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  16. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  17. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  18. The Influence of Prior Knowledge on the Retrieval-Directed Function of Note Taking in Prior Knowledge Activation

    ERIC Educational Resources Information Center

    Wetzels, Sandra A. J.; Kester, Liesbeth; van Merrienboer, Jeroen J. G.; Broers, Nick J.

    2011-01-01

    Background: Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in…

  19. Neutrino masses and their ordering: global data, priors and models

    NASA Astrophysics Data System (ADS)

    Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.

    2018-03-01

    We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering or Inverted Ordering, with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parameterization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the

  20. Experimentally Derived δ13C and δ15N Discrimination Factors for Gray Wolves and the Impact of Prior Information in Bayesian Mixing Models

    PubMed Central

    Bucci, Melanie E.; Callahan, Peggy; Koprowski, John L.; Polfus, Jean L.; Krausman, Paul R.

    2015-01-01

    Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable. PMID:25803664

  1. A comment on priors for Bayesian occupancy models

    PubMed Central

    Gerber, Brian D.

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are “uninformative” or “vague”, such priors can easily be unintentionally highly informative. Here we report on how the specification of a “vague” normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts. PMID:29481554

  2. A comment on priors for Bayesian occupancy models.

    PubMed

    Northrup, Joseph M; Gerber, Brian D

    2018-01-01

    Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
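
    The effect described in both versions of this record is easy to reproduce: a "vague" Gaussian prior on a logit-scale intercept implies a U-shaped, far-from-vague prior on the occupancy probability itself. A quick check, with illustrative prior standard deviations:

        # Implied prior on the occupancy probability psi from a Gaussian prior
        # on the logit-scale intercept. Prior SDs are illustrative choices;
        # sd ~ 31.6 corresponds to the common "precision 0.001" prior.
        import numpy as np
        from scipy.special import expit

        rng = np.random.default_rng(0)
        for sd in (1.6, 5.0, 31.6):
            psi = expit(rng.normal(0.0, sd, 100_000))  # implied prior draws
            extreme = np.mean((psi < 0.01) | (psi > 0.99))
            print(f"sd = {sd:5.1f}: P(psi < 0.01 or psi > 0.99) = {extreme:.3f}")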

  3. Cosmological information in Gaussianized weak lensing signals

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.; Kiessling, A.

    2011-11-01

    Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non
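
    The Gaussianizing step itself is straightforward to sketch: fit an optimal Box-Cox transformation to a skewed, strictly positive sample (a crude stand-in for an offset convergence field, not a simulated lensing map) and compare skewness before and after:

        # Box-Cox Gaussianization of a skewed positive sample with scipy.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        kappa = rng.lognormal(mean=-1.0, sigma=0.5, size=50_000)  # skewed, > 0

        transformed, lam = stats.boxcox(kappa)   # MLE of the Box-Cox lambda
        print(f"lambda = {lam:.3f}")
        print(f"skewness before: {stats.skew(kappa):.3f}, "
              f"after: {stats.skew(transformed):.3f}")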

  4. Discovering mutated driver genes through a robust and sparse co-regularized matrix factorization framework with prior information from mRNA expression patterns and interaction network.

    PubMed

    Xi, Jianing; Wang, Minghui; Li, Ao

    2018-06-05

    Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, prior information from mRNA expression patterns, which has also proven highly informative of cancer progression, is not exploited by these existing network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Furthermore, our framework applies Frobenius-norm regularization to mitigate overfitting. A sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation against known benchmark genes indicates that the performance of our method benefits from the two types of prior information. Our method also outperforms the existing network-based methods and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust and sparse co-regularized matrix factorization framework.
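
    The full method combines several regularizers; the sketch below shows only the sparsity-inducing core, a nonnegative matrix factorization with an l1 penalty on the gene-score factor solved by multiplicative updates, run on random stand-in data rather than real mutation matrices:

        # Sparse NMF via multiplicative updates: minimize ||V - W H||_F^2
        # + lam * ||H||_1 with V, W, H nonnegative. Not the paper's robust
        # co-regularized objective; data are random placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        V = rng.random((50, 200))            # patients x genes mutation matrix
        k, lam, eps = 5, 0.1, 1e-9           # rank, l1 weight, stabilizer

        W = rng.random((V.shape[0], k))
        H = rng.random((k, V.shape[1]))
        for _ in range(300):
            H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # l1 penalty enters here
            W *= (V @ H.T) / (W @ H @ H.T + eps)

        scores = H.max(axis=0)               # per-gene sparse scores
        top_genes = np.argsort(scores)[::-1][:10]
        print("top candidate gene indices:", top_genes)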

  5. Menarche: Prior Knowledge and Experience.

    ERIC Educational Resources Information Center

    Skandhan, K. P.; And Others

    1988-01-01

    Recorded menstruation information among 305 young women in India, assessing the differences between those who did and did not have knowledge of menstruation prior to menarche. Those with prior knowledge considered menarche to be a normal physiological function and had a higher rate of regularity, lower rate of dysmenorrhea, and earlier onset of…

  6. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but is also robust to noise.

  7. Sunyaev-Zel'dovich Effect and X-ray Scaling Relations from Weak-Lensing Mass Calibration of 32 SPT Selected Galaxy Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, J.P.; et al.

    Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy-cluster-based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass $M_\mathrm{gas}$, and $Y_\mathrm{X}$, the product of $M_\mathrm{gas}$ and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using $N$-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray-observable scaling relation parameters are consistent with those obtained by earlier studies, and our constraints for the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.

  8. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and the wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, and more generally yields inefficient sampling schemes at low degrees of sub-sampling. This work explores the trade-off between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
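
    The baseline prescription the paper critiques and refines, biasing a weighted random k-space mask toward the spectral energy of the binarized geometry, can be sketched as follows; the channel phantom and sampling fraction are illustrative, not the paper's:

        # Weighted random k-space undersampling mask, with sampling density
        # proportional to the spectral energy of a binary geometry image.
        import numpy as np

        N = 128
        geometry = np.zeros((N, N))
        geometry[60:68, :] = 1.0               # hypothetical channel phantom

        energy = np.abs(np.fft.fftshift(np.fft.fft2(geometry)))
        p = (energy / energy.sum()).ravel()    # sampling density over k-space

        m = int(0.15 * N * N)                  # keep 15% of k-space samples
        idx = np.random.default_rng(0).choice(N * N, size=m, replace=False, p=p)
        mask = np.zeros(N * N, dtype=bool)
        mask[idx] = True
        mask = mask.reshape(N, N)
        print(f"sampled {mask.sum()} of {N * N} k-space locations")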

  9. The pharmacokinetics of dexmedetomidine during long-term infusion in critically ill pediatric patients. A Bayesian approach with informative priors.

    PubMed

    Wiczling, Paweł; Bartkowska-Śniatkowska, Alicja; Szerkus, Oliwia; Siluk, Danuta; Rosada-Kurasińska, Jowita; Warzybok, Justyna; Borsuk, Agnieszka; Kaliszan, Roman; Grześkowiak, Edmund; Bienert, Agnieszka

    2016-06-01

    The purpose of this study was to assess the pharmacokinetics of dexmedetomidine in the ICU setting during prolonged infusion and to compare it with existing literature data using Bayesian population modeling with literature-based informative priors. Thirty-eight patients were included in the analysis, with concentration measurements obtained on two occasions: first from 0 to 24 h after infusion initiation and second from 0 to 8 h after infusion end. Data analysis was conducted using WinBUGS software. The prior information on dexmedetomidine pharmacokinetics was elicited from a literature study pooling results from a relatively large group of 95 children. A two-compartment PK model, with allometrically scaled parameters, maturation of clearance, and a Student's t residual distribution on a log scale, was used to describe the data. The incorporation of time-dependent (different between the two occasions) PK parameters improved the model. It was observed that the volume of distribution was 1.5-fold higher on the second occasion. There was also evidence of increased (1.3-fold) clearance on the second occasion, with a posterior probability of 62%. This work demonstrates the usefulness of Bayesian modeling with informative priors in analyzing pharmacokinetic data and comparing it with existing literature knowledge.
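    A sketch of the standard parameterization alluded to by "allometrically scaled parameters" and "maturation of clearance"; the functional forms are conventional for pediatric PK, but the reference clearance, TM50, and Hill coefficient below are invented for illustration, not the fitted values:

```python
import numpy as np

# Allometric scaling of clearance with a sigmoidal maturation term,
# the usual pediatric PK parameterization (illustrative parameters).
def clearance(weight_kg, pma_weeks, cl_ref=40.0, tm50=45.0, hill=3.0):
    """Clearance in L/h, scaled from a 70-kg reference value cl_ref."""
    size = (weight_kg / 70.0) ** 0.75          # allometric exponent 0.75
    maturation = pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)
    return cl_ref * size * maturation

for wt, pma in [(5.0, 48.0), (20.0, 300.0), (70.0, 2000.0)]:
    print(f"weight={wt:5.1f} kg, PMA={pma:6.0f} wk -> "
          f"CL={clearance(wt, pma):6.2f} L/h")
```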

  10. Heuristics as Bayesian inference under extreme priors.

    PubMed

    Parpart, Paula; Jones, Matt; Love, Bradley C

    2018-05-01

    Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference in the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Our analyses suggest that heuristics perform well not because of their simplicity, but because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
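    A minimal sketch of the prior-strength continuum on a toy cue-learning problem: the MAP estimate of a Bayesian linear model with prior mean at the tallying weights interpolates between ordinary regression (lambda = 0) and tallying (lambda -> infinity). This is a simplification of the paper's construction; data and names are illustrative:

```python
import numpy as np

# MAP weights under prior beta ~ N(beta_tally, (sigma^2/lambda) I):
# beta(lam) = (X'X + lam I)^{-1} (X'y + lam beta_tally).
rng = np.random.default_rng(42)
n, k = 20, 5
X = rng.choice([0.0, 1.0], size=(n, k))        # binary cues, signs known
beta_true = np.array([2.0, 1.5, 1.0, 0.5, 0.2])
y = X @ beta_true + rng.normal(0, 1.0, n)

beta_tally = np.ones(k)                        # the heuristic's unit weights

def map_weights(lam):
    # Tiny jitter keeps the lam = 0 (OLS) case numerically safe.
    A = X.T @ X + (lam + 1e-9) * np.eye(k)
    return np.linalg.solve(A, X.T @ y + lam * beta_tally)

for lam in [0.0, 1.0, 10.0, 1e6]:
    print(f"lambda={lam:9.1f} -> {np.round(map_weights(lam), 2)}")
```

The prior mean plays the role of the heuristic; lambda is the prior strength, and intermediate values correspond to the down-weighting the abstract argues for.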

  11. Bayesian inference with historical data-based informative priors improves detection of differentially expressed genes.

    PubMed

    Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S

    2016-03-01

    Modern high-throughput biotechnologies such as microarrays are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only a limited number of samples is assayed: the classical 'large p, small n' problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework, in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods, including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the 'large p, small n' problem. Our method is implemented in the R package IPBT, which is freely available from https://github.com/benliemory/IPBT. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
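    A hedged sketch of the general strategy, not the IPBT model itself: fit an inverse-gamma prior to historical per-gene variances by moments, then report posterior-mean variances for the new, small-sample dataset:

```python
import numpy as np

# Empirical-Bayes shrinkage of per-gene variances toward a prior fitted
# from historical data (illustrative simulation).
rng = np.random.default_rng(7)
n_genes, n_new = 1000, 4

hist_var = rng.gamma(3.0, 0.1, size=n_genes)       # historical variances
new_data = rng.normal(0, np.sqrt(hist_var)[:, None], size=(n_genes, n_new))
s2 = new_data.var(axis=1, ddof=1)                  # noisy new estimates

# Method-of-moments inverse-gamma fit to the historical variances:
m, v = hist_var.mean(), hist_var.var()
alpha = m ** 2 / v + 2.0
beta = m * (alpha - 1.0)

# Posterior mean of each variance given d degrees of freedom of new data:
d = n_new - 1
shrunk = (beta + 0.5 * d * s2) / (alpha + 0.5 * d - 1.0)
print(f"raw spread {s2.std():.3f} vs shrunken spread {shrunk.std():.3f}")
```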

  12. Great apes are sensitive to prior reliability of an informant in a gaze following task.

    PubMed

    Schmid, Benjamin; Karg, Katja; Perner, Josef; Tomasello, Michael

    2017-01-01

    Social animals frequently rely on information from other individuals. This can be costly if the other individual is mistaken or even deceptive. Human infants below 4 years of age show proficiency in their reliance on differently reliable informants. They can infer the reliability of an informant from a few interactions and use that assessment in later interactions with the same informant in a different context. To explore whether great apes share that ability, we confronted great apes with a reliable or unreliable informant in an object choice task, to see whether that would affect their gaze following behaviour in response to the same informant in a subsequent task. In our study, prior reliability of the informant and habituation during the gaze following task affected both great apes' automatic gaze following response and their more deliberate response of gaze following behind barriers. As habituation is very context specific, it is unlikely that habituation in the reliability task affected the gaze following task. Rather, it seems that apes employ a reliability tracking strategy that results in a general avoidance of additional information from an unreliable informant.

  13. Incorporation of stochastic engineering models as prior information in Bayesian medical device trials.

    PubMed

    Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh

    2017-01-01

    Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.
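    A minimal power-prior sketch under a beta-binomial model with a similarity-based discount; the endpoint, the specific discount function, and all numbers are illustrative assumptions rather than the paper's case study:

```python
# Power prior: virtual patient data enter the posterior with weight
# alpha0, which shrinks as modeled and observed failure rates diverge.
def power_prior_posterior(x_obs, n_obs, x_virt, n_virt, a=1.0, b=1.0):
    p_obs, p_virt = x_obs / n_obs, x_virt / n_virt
    # Discount function: full weight when rates agree, decaying with
    # their absolute difference (one simple choice among many).
    alpha0 = max(0.0, 1.0 - 10.0 * abs(p_obs - p_virt))
    post_a = a + x_obs + alpha0 * x_virt
    post_b = b + (n_obs - x_obs) + alpha0 * (n_virt - x_virt)
    return alpha0, post_a, post_b

alpha0, pa, pb = power_prior_posterior(x_obs=4, n_obs=80,
                                       x_virt=60, n_virt=1200)
print(f"alpha0={alpha0:.2f}, posterior mean={pa / (pa + pb):.4f}")
```

Tuning how fast the discount decays is how one trades enrolled sample size against type I error rate and power, as the abstract describes.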

  14. Exploring the Transformative Potential of Recognition of Prior Informal Learning for Learners: A Case Study in Scotland

    ERIC Educational Resources Information Center

    Brown, Julie

    2017-01-01

    This article presents an overview of the findings of a recently completed study exploring the potentially transformative impact upon learners of recognition of prior informal learning (RPL). The specific transformative dimension being reported is learner identity. In addition to providing a starting point for an evidence base within Scotland, the…

  15. Screening the use of informed consent forms prior to procedures involving operative dentistry: ethical aspects

    PubMed Central

    Graziele Rodrigues, Livia; De Souza, João Batista; De Torres, Erica Miranda; Ferreira Silva, Rhonan

    2017-01-01

    Background. The present study aimed to screen the knowledge and attitudes of dentists toward the use of informed consent forms prior to procedures involving operative dentistry. Methods. A research tool (questionnaire) containing questions regarding the use of informed consent forms was developed. The questionnaire consisted of seven questions structured to screen current practice in operative dentistry regarding the use of informed consent forms. Results. The questionnaires were distributed among 731 dentists, of whom 179 returned them with answers. Sixty-seven dentists reported not using informed consent forms. The main reasons for not using informed consent forms were: having a complete dental record signed by the patient (67.2%) and having a good relationship with patients (43.6%). The dentists who reported using informed consent forms revealed that they obtained them from other dentists and made their own modifications (35.9%). Few dentists revealed contacting lawyers (1.7%) and experts in legal dentistry (0.9%) for the development of their informed consent forms. Conclusion. A high number of dentists working in the field of operative dentistry do not follow the ethical standards of clinical practice, leaving themselves unprotected against ethical and legal actions. PMID:28413600

  16. Weak measurements and quantum weak values for NOON states

    NASA Astrophysics Data System (ADS)

    Rosales-Zárate, L.; Opanchuk, B.; Reid, M. D.

    2018-03-01

    Quantum weak values arise when the mean outcome of a weak measurement made on certain preselected and postselected quantum systems goes beyond the eigenvalue range for a quantum observable. Here, we propose how to determine quantum weak values for superpositions of states with a macroscopically or mesoscopically distinct mode number, that might be realized as two-mode Bose-Einstein condensate or photonic NOON states. Specifically, we give a model for a weak measurement of the Schwinger spin of a two-mode NOON state, for arbitrary N. The weak measurement arises from a nondestructive measurement of the two-mode occupation number difference, which for atomic NOON states might be realized via phase contrast imaging and the ac Stark effect using an optical meter prepared in a coherent state. The meter-system coupling results in an entangled cat-state. By subsequently evolving the system under the action of a nonlinear Josephson Hamiltonian, we show how postselection leads to quantum weak values, for arbitrary N. Since the weak measurement can be shown to be minimally invasive, the weak values provide a useful strategy for a Leggett-Garg test of N-scopic realism.

  17. Decoherence suppression of tripartite entanglement in non-Markovian environments by using weak measurements

    NASA Astrophysics Data System (ADS)

    Ding, Zhi-yong; He, Juan; Ye, Liu

    2017-02-01

    A feasible scheme for protecting the Greenberger-Horne-Zeilinger (GHZ) entangled state in non-Markovian environments is proposed. It consists of a prior weak measurement on each qubit before the interaction with the decohering environment, followed by post-interaction quantum measurement reversals. It is shown that both the fidelity and concurrence of the GHZ state can be effectively improved. Meanwhile, we also verified that our scenario can enhance tripartite nonlocality remarkably. In addition, the results indicate that the larger the weak measurement strength, the better the scheme performs, at the cost of a lower success probability.

  18. Association of eHealth literacy with cancer information seeking and prior experience with cancer screening.

    PubMed

    Park, Hyejin; Moon, Mikyung; Baeg, Jung Hoon

    2014-09-01

    Cancer is a critical disease with a high mortality rate in the US. Although useful information exists on the Internet, many people experience difficulty finding information about cancer prevention because they have limited eHealth literacy. This study aimed to identify relationships between the level of eHealth literacy and cancer information seeking experience or prior experience with cancer screening tests. A total of 108 adults participated in this study through questionnaires. Data covering demographics, eHealth literacy, cancer information seeking experience, educational needs for cancer information searching, and previous cancer screening tests were obtained. Study findings show that the level of eHealth literacy influences cancer information seeking. Individuals with low eHealth literacy are likely to be less confident about finding cancer information. In addition, people who have a low level of eHealth literacy need more education about seeking information than do those with a higher level of eHealth literacy. However, there is no significant relationship between eHealth literacy and cancer screening tests. More people today are using the Internet for access to information to maintain good health. It is therefore critical to educate those with low eHealth literacy so they can better self-manage their health.

  19. Weak Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Pires, Sandrine; Starck, Jean-Luc; Leonard, Adrienne; Réfrégier, Alexandre

    2012-03-01

    Gravitational lensing is the process by which light from distant galaxies is bent by the gravity of intervening mass in the Universe as it travels toward us. This bending causes the images of background galaxies to appear slightly distorted, and can be used to extract important cosmological information. In the beginning of the twentieth century, A. Einstein predicted that massive bodies could be seen as gravitational lenses that bend the path of light rays by creating a local curvature in space-time. One of the first confirmations of Einstein's new theory was the observation during the 1919 solar eclipse of the deflection of light from distant stars by the sun. Since then, a wide range of lensing phenomena have been detected. The gravitational deflection of light by mass concentrations along light paths produces magnification, multiplication, and distortion of images. These lensing effects are illustrated by Figure 14.2, which shows one of the strongest lenses observed: Abell 2218, a very massive and distant cluster of galaxies in the constellation Draco. The observed gravitational arcs are actually the magnified and strongly distorted images of galaxies that are about 10 times more distant than the cluster itself. These strong gravitational lensing effects are very impressive but they are very rare. Far more prevalent are weak gravitational lensing effects, which we consider in this chapter, and in which the induced distortion in galaxy images is much weaker. These gravitational lensing effects are now widely used, but the amplitude of the weak lensing signal is so weak that its detection relies on the accuracy of the techniques used to analyze the data. Future weak lensing surveys are already planned in order to cover a large fraction of the sky with high accuracy, such as Euclid [68]. However, improving accuracy also places greater demands on the methods used to extract the available information.

  20. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate
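    A toy numerical illustration of the two subtraction orders discussed above, assuming the dominant (visual) source is perfectly phase-locked across the experimental and control recordings so that sensor subtraction cancels it exactly; the leadfields and source amplitudes are invented, and a simple regularized unit-gain LCMV beamformer stands in for the paper's pipeline:

```python
import numpy as np

# Estimate a weak source either by subtracting sensor data before
# beamforming, or by subtracting the two source power estimates after
# beamforming (synthetic data).
rng = np.random.default_rng(0)
n_sensors, n_samples = 64, 5000

L_strong = rng.normal(size=(n_sensors, 1))          # dominant source leadfield
L_weak = rng.normal(size=(n_sensors, 1))            # weak source leadfield
common = L_strong @ rng.normal(0, 50.0, (1, n_samples))

exp_data = (common + L_weak @ rng.normal(0, 3.0, (1, n_samples))
            + rng.normal(0, 1.0, (n_sensors, n_samples)))
ctl_data = common + rng.normal(0, 1.0, (n_sensors, n_samples))

def lcmv_power(data, L):
    C = np.cov(data)
    C += 1e-3 * np.trace(C) / n_sensors * np.eye(n_sensors)  # regularize
    Ci = np.linalg.inv(C)
    w = Ci @ L / (L.T @ Ci @ L)                      # unit-gain LCMV weights
    return float(w.T @ C @ w)

data_sub = lcmv_power(exp_data - ctl_data, L_weak)
image_sub = lcmv_power(exp_data, L_weak) - lcmv_power(ctl_data, L_weak)
print("simulated weak-source variance: 9.0")
print(f"subtract-then-localize estimate : {data_sub:6.2f}")
print(f"localize-then-subtract estimate : {image_sub:6.2f}")
```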

  1. Weak-interaction rates in stellar conditions

    NASA Astrophysics Data System (ADS)

    Sarriguren, Pedro

    2018-05-01

    Weak-interaction rates, including β-decay and electron captures, are studied in several mass regions at various densities and temperatures of astrophysical interest. In particular, we study odd-A nuclei in the pf-shell region, which are involved in presupernova formations. Weak rates are relevant for understanding the late stages of stellar evolution, as well as the nucleosynthesis of heavy nuclei. The nuclear structure involved in the weak processes is studied within a quasiparticle proton-neutron random-phase approximation with residual interactions in both particle-hole and particle-particle channels on top of a deformed Skyrme Hartree-Fock mean field with pairing correlations. First, the energy distributions of the Gamow-Teller strength are discussed and compared with the available experimental information, measured under terrestrial conditions from charge-exchange reactions. Then, the sensitivity of the weak-interaction rates to both astrophysical densities and temperatures is studied. Special attention is paid to the relative contribution to these rates of thermally populated excited states in the decaying nucleus and to electron captures from the degenerate electron plasma.

  2. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models

    PubMed Central

    Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.

    2014-01-01

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate bias-variance trade-offs beyond those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
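    A hedged sketch of the commensurate-prior idea reduced to a normal mean with known sampling variance (a drastic simplification of the mixed-model setting): the current-study mean gets a prior centered at the historical mean, whose precision tau expresses commensurability:

```python
import numpy as np
from scipy import stats

# Grid posterior for the current-study mean under a commensurate prior
# centered at the historical mean (toy data, known sampling SD).
y_hist = np.array([1.8, 2.3, 2.0, 2.6])       # historical data (toy)
y_curr = np.array([1.1, 0.9, 1.4])            # current data (toy)
sigma = 0.5                                    # known sampling SD (assumed)
mu0 = y_hist.mean()

grid = np.linspace(-2, 5, 2001)
for tau in [0.1, 1.0, 10.0]:
    prior = stats.norm.pdf(grid, loc=mu0, scale=1 / np.sqrt(tau))
    like = np.prod(stats.norm.pdf(y_curr[:, None], loc=grid, scale=sigma),
                   axis=0)
    post = prior * like
    post /= post.sum()
    print(f"tau={tau:5.1f} -> posterior mean {(grid * post).sum():.3f}")
```

Small tau discounts the historical information (posterior tracks the current data); large tau pulls the estimate toward the historical mean, which is the bias-variance trade-off the abstract describes.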

  3. Experimentally derived δ¹³C and δ¹⁵N discrimination factors for gray wolves and the impact of prior information in Bayesian mixing models.

    PubMed

    Derbridge, Jonathan J; Merkle, Jerod A; Bucci, Melanie E; Callahan, Peggy; Koprowski, John L; Polfus, Jean L; Krausman, Paul R

    2015-01-01

    Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework have consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors and, for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on the bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests that species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions, researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable.
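    A minimal one-isotope, two-source mixing-model sketch with a random-walk Metropolis sampler; real SIMMs use both δ13C and δ15N plus additional structure, and all source and consumer values below are invented, with only the 3.04‰ δ15N discrimination factor taken from the abstract:

```python
import numpy as np

# Infer the diet proportion p of prey 1, adding the discrimination
# factor to each source mean, with a Beta prior on p.
rng = np.random.default_rng(3)
delta = 3.04                                  # d15N discrimination (wolves)
src_mean = np.array([6.0, 2.5]) + delta       # prey means + discrimination
src_sd = np.array([0.6, 0.8])
consumer = np.array([8.3, 8.9, 8.6, 9.1])     # toy wolf tissue values

def log_post(p, a=1.0, b=1.0):                # Beta(a, b) prior on p
    if not 0.0 < p < 1.0:
        return -np.inf
    mu = p * src_mean[0] + (1 - p) * src_mean[1]
    sd = np.sqrt((p * src_sd[0]) ** 2 + ((1 - p) * src_sd[1]) ** 2)
    loglike = np.sum(-0.5 * ((consumer - mu) / sd) ** 2 - np.log(sd))
    return loglike + (a - 1) * np.log(p) + (b - 1) * np.log(1 - p)

p, chain = 0.5, []                            # random-walk Metropolis
for _ in range(20000):
    prop = p + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    chain.append(p)
print(f"posterior mean diet proportion: {np.mean(chain[2000:]):.2f}")
```

Replacing the flat Beta(1, 1) prior with an informative one is exactly the lever whose influence the study measures.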

  4. Towards a better preclinical model of PTSD: characterizing animals with weak extinction, maladaptive stress responses and low plasma corticosterone.

    PubMed

    Reznikov, Roman; Diwan, Mustansir; Nobrega, José N; Hamani, Clement

    2015-02-01

    Most of the available preclinical models of PTSD have focused on isolated behavioural aspects and have not considered individual variations in response to stress. We employed behavioural criteria to identify and characterize a subpopulation of rats that present several features analogous to PTSD-like states after exposure to classical fear conditioning. Outbred Sprague-Dawley rats were segregated into weak- and strong-extinction groups on the basis of behavioural scores during extinction of conditioned fear responses. Animals were subsequently tested for anxiety-like behaviour in the open-field test (OFT), novelty suppressed feeding (NSF) and elevated plus maze (EPM). Baseline plasma corticosterone was measured prior to any behavioural manipulation. In a second experiment, rats underwent OFT, NSF and EPM prior to being subjected to fear conditioning to ascertain whether or not pre-stress levels of anxiety-like behaviours could predict extinction scores. We found that 25% of rats exhibit low extinction rates of conditioned fear, a feature that was associated with increased anxiety-like behaviour across multiple tests in comparison to rats showing strong extinction. In addition, weak-extinction animals showed low levels of corticosterone prior to fear conditioning, a variable that seemed to predict extinction recall scores. In a separate experiment, anxiety measures taken prior to fear conditioning were not predictive of a weak-extinction phenotype, suggesting that weak-extinction animals do not show detectable traits of anxiety in the absence of a stressful experience. These findings suggest that extinction impairment may be used to identify stress-vulnerable rats, thus providing a useful model for elucidating mechanisms and investigating potential treatments for PTSD. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Metal Artifact Reduction in X-ray Computed Tomography Using Computer-Aided Design Data of Implants as Prior Information.

    PubMed

    Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A

    2017-06-01

    The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold
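    A hedged sketch of the LI baseline on a toy phantom, with threshold-based segmentation standing in for TS (the PS variant would replace the threshold step with registered CAD geometry); the phantom values and the skimage-based projector are illustrative choices:

```python
import numpy as np
from skimage.transform import radon, iradon

# LI-MAR: project the metal mask into the sinogram, replace
# metal-affected values by interpolation along each projection,
# reconstruct, and paste the metal voxels back.
image = np.zeros((128, 128))
image[40:90, 40:90] = 0.2                      # soft tissue (toy values)
image[62:66, 62:66] = 4.0                      # metal implant

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)

metal = image > 1.0                            # threshold segmentation (TS)
metal_trace = radon(metal.astype(float), theta=theta) > 0.5

corrected = sino.copy()
for j in range(sino.shape[1]):                 # interpolate per angle
    col, bad = sino[:, j], metal_trace[:, j]
    if bad.any():
        idx = np.arange(col.size)
        corrected[bad, j] = np.interp(idx[bad], idx[~bad], col[~bad])

recon = iradon(corrected, theta=theta)
recon[metal] = image[metal]                    # reinsert metal voxels
err = np.sqrt(((recon - image)[~metal] ** 2).mean())
print(f"background RMS error: {err:.4f}")
```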

  6. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  7. The influence of prior knowledge on the retrieval-directed function of note taking in prior knowledge activation.

    PubMed

    Wetzels, Sandra A J; Kester, Liesbeth; van Merriënboer, Jeroen J G; Broers, Nick J

    2011-06-01

    Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in supporting prior knowledge activation if available prior knowledge is limited. This study investigates the effects of the retrieval-directed function of note taking depending on learners' level of prior knowledge. It is hypothesized that the effectiveness of note taking is influenced by the amount of prior knowledge learners already possess. Sixty-one high school students participated in this study. A prior knowledge test was used to ascertain differences in level of prior knowledge and assign participants to a low or a high prior knowledge group. A 2×2 factorial design was used to investigate the effects of note taking during prior knowledge activation (yes, no) depending on learners' level of prior knowledge (low, high) on mental effort, performance, and mental efficiency. Note taking during prior knowledge activation lowered mental effort and increased mental efficiency for high prior knowledge learners. For low prior knowledge learners, note taking had the opposite effect on mental effort and mental efficiency. The effects of the retrieval-directed function of note taking are influenced by learners' level of prior knowledge. Learners with high prior knowledge benefit from taking notes while activating prior knowledge, whereas note taking has no beneficial effects for learners with limited prior knowledge. ©2010 The British Psychological Society.

  8. Entanglement-Assisted Weak Value Amplification

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Dressel, Justin; Brun, Todd A.

    2014-07-01

    Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.

  9. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost by collapsing the dose distribution into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs that were comparable to manual planning, even though the DVH for the prior case was different from the DVHs for the 14 cases. Using a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage the spatial information without transforming the patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The use of prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
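    A minimal 2-D sketch of the shell-partitioning step, using a distance transform to split a toy PTV into inner, middle, and outer shells and reading off per-shell dose statistics; the shell widths, phantom, and dose field are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Partition a PTV mask into shells by depth from the structure boundary,
# then extract the per-shell statistics that would become Dmin/Dmax
# parameters for the new patient.
yy, xx = np.mgrid[0:100, 0:100]
r2 = (yy - 50) ** 2 + (xx - 50) ** 2
ptv = r2 < 20 ** 2                                   # toy circular PTV
dose = 60.0 * np.exp(-r2 / 2500.0)                   # toy dose distribution

depth = distance_transform_edt(ptv)                  # depth inside the PTV
bins = np.quantile(depth[ptv], [1 / 3, 2 / 3])       # equal-count shells
labels = np.digitize(depth, bins)                    # 0=outer ... 2=inner

for name, lab in [("outer", 0), ("middle", 1), ("inner", 2)]:
    d = dose[ptv & (labels == lab)]
    print(f"{name:6s} shell: Dmin={d.min():5.1f} Gy, Dmax={d.max():5.1f} Gy")
```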

  10. Initial information prior to movement onset influences kinematics of upward arm pointing movements

    PubMed Central

    Rousseau, Célia; Papaxanthis, Charalambos; Gaveau, Jérémie; Pozzo, Thierry; White, Olivier

    2016-01-01

    To elaborate a motor plan and perform online control in the gravity field, the brain relies on priors and multisensory integration of information. In particular, afferent and efferent inputs related to the initial state are thought to convey sensorimotor information to plan the upcoming action. Yet it is still unclear to what extent these cues impact motor planning. Here we examined the role of initial information on the planning and execution of arm movements. Participants performed upward arm movements around the shoulder at three speeds and in two arm conditions. In the first condition, the arm was outstretched horizontally and required a significant muscular command to compensate for the gravitational shoulder torque before movement onset. In contrast, in the second condition the arm was passively maintained in the same position with a cushioned support and did not require any muscle contraction before movement execution. We quantified differences in motor performance by comparing shoulder velocity profiles. Previous studies showed that asymmetric velocity profiles reflect an optimal integration of the effects of gravity on upward movements. Consistent with this, we found decreased acceleration durations in both arm conditions. However, early differences in kinematic asymmetries and EMG patterns between the two conditions signaled a change of the motor plan. This different behavior carried on through trials when the arm was at rest before movement onset and may reveal a distinct motor strategy chosen in the context of uncertainty. Altogether, we suggest that the information available online must be complemented by accurate initial information. PMID:27486106

  11. Initial information prior to movement onset influences kinematics of upward arm pointing movements.

    PubMed

    Rousseau, Célia; Papaxanthis, Charalambos; Gaveau, Jérémie; Pozzo, Thierry; White, Olivier

    2016-10-01

    To elaborate a motor plan and perform online control in the gravity field, the brain relies on priors and multisensory integration of information. In particular, afferent and efferent inputs related to the initial state are thought to convey sensorimotor information to plan the upcoming action. Yet it is still unclear to what extent these cues impact motor planning. Here we examined the role of initial information on the planning and execution of arm movements. Participants performed upward arm movements around the shoulder at three speeds and in two arm conditions. In the first condition, the arm was outstretched horizontally and required a significant muscular command to compensate for the gravitational shoulder torque before movement onset. In contrast, in the second condition the arm was passively maintained in the same position with a cushioned support and did not require any muscle contraction before movement execution. We quantified differences in motor performance by comparing shoulder velocity profiles. Previous studies showed that asymmetric velocity profiles reflect an optimal integration of the effects of gravity on upward movements. Consistent with this, we found decreased acceleration durations in both arm conditions. However, early differences in kinematic asymmetries and EMG patterns between the two conditions signaled a change of the motor plan. This different behavior carried on through trials when the arm was at rest before movement onset and may reveal a distinct motor strategy chosen in the context of uncertainty. Altogether, we suggest that the information available online must be complemented by accurate initial information. Copyright © 2016 the American Physiological Society.

  12. Investigating the impact of spatial priors on the performance of model-based IVUS elastography

    PubMed Central

    Richards, M S; Doyley, M M

    2012-01-01

    This paper describes methods that provide pre-requisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to provide useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded those computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
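    A hedged linear-inverse sketch of the soft-prior idea on a toy 1-D problem: a penalty pulls the modulus estimate toward the geometric prior without enforcing it exactly; the forward operator and all values are invented:

```python
import numpy as np

# Soft-prior regularization: minimize ||d - G m||^2 + alpha ||m - m_prior||^2.
rng = np.random.default_rng(1)
n = 40
m_true = np.ones(n); m_true[15:25] = 3.0          # stiff inclusion
G = rng.normal(size=(60, n))                      # toy forward operator
d = G @ m_true + rng.normal(0, 0.5, 60)           # noisy measurements

m_prior = np.ones(n); m_prior[15:25] = 2.5        # imperfect spatial prior

def solve(alpha):
    # Normal equations for the penalized least-squares problem.
    A = G.T @ G + alpha * np.eye(n)
    return np.linalg.solve(A, G.T @ d + alpha * m_prior)

for alpha in [0.0, 10.0, 1000.0]:
    err = np.linalg.norm(solve(alpha) - m_true) / np.linalg.norm(m_true)
    print(f"alpha={alpha:7.1f}: relative error {err:.3f}")
```

A hard prior would instead fix the segmented regions to piecewise-constant values; the soft penalty is what gives the method its reported tolerance to errors in the geometry.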

  13. Nudging toward Inquiry: Awakening and Building upon Prior Knowledge

    ERIC Educational Resources Information Center

    Fontichiaro, Kristin, Comp.

    2010-01-01

    "Prior knowledge" (sometimes called schema or background knowledge) is information one already knows that helps him/her make sense of new information. New learning builds on existing prior knowledge. In traditional reporting-style research projects, students bypass this crucial step and plow right into answer-finding. It's no wonder that many…

  14. Enhancing robustness of multiparty quantum correlations using weak measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Uttam, E-mail: uttamsingh@hri.res.in; Mishra, Utkarsh, E-mail: utkarsh@hri.res.in; Dhar, Himadri Shekhar, E-mail: dhar.himadri@gmail.com

    Multipartite quantum correlations are important resources for the development of quantum information and computation protocols. However, the resourcefulness of multipartite quantum correlations in practical settings is limited by their fragility under decoherence due to environmental interactions. Though there exist protocols to protect bipartite entanglement under decoherence, the implementation of such protocols for multipartite quantum correlations has not been sufficiently explored. Here, we study the effect of a local amplitude damping channel on the generalized Greenberger–Horne–Zeilinger state, and use a protocol of optimal reversal quantum weak measurement to protect the multipartite quantum correlations. We observe that the weak measurement reversal protocol enhances the robustness of multipartite quantum correlations. Further, it increases the critical damping value that corresponds to entanglement sudden death. To emphasize the efficacy of the technique in protecting multipartite quantum correlations, we investigate two closely related quantum communication tasks, namely, quantum teleportation in a one-sender, many-receivers setting and multiparty quantum information splitting, through a local amplitude damping channel. We observe an increase in the average fidelity of both quantum communication tasks under the weak measurement reversal protocol. The method may prove beneficial for combating external interactions in other quantum information tasks using multipartite resources. - Highlights: • Extension of the weak measurement reversal scheme to protect multiparty quantum correlations. • Protection of multiparty quantum correlations under local amplitude damping noise. • Enhanced fidelity of quantum teleportation in a one-sender and many-receivers setting. • Enhanced fidelity of the quantum information splitting protocol.
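    A minimal sketch of the per-qubit weak-measurement/reversal scheme for a 3-qubit GHZ state under local amplitude damping, using the standard single-qubit Kraus operators; the strengths gamma and p are illustrative, and the reversal strength is matched so that the no-jump path is restored exactly:

```python
import itertools
import numpy as np

# Prior weak measurement biases each qubit toward |0>, damping acts,
# and a matched reversal restores the no-jump path, at the cost of a
# reduced success probability.
def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def apply_channel(rho, kraus, n=3):
    # Apply the same single-qubit Kraus set independently to each qubit.
    new = np.zeros_like(rho)
    for combo in itertools.product(kraus, repeat=n):
        K = kron_all(combo)
        new += K @ rho @ K.conj().T
    return new

gamma, p = 0.5, 0.6                       # damping and measurement strength
q = 1.0 - (1.0 - p) * (1.0 - gamma)       # matched reversal strength

weak = [np.diag([1.0, np.sqrt(1.0 - p)])]                 # null-result WM
damp = [np.diag([1.0, np.sqrt(1.0 - gamma)]),
        np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]    # amplitude damping
reverse = [np.diag([np.sqrt(1.0 - q), 1.0])]              # reversal

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho0 = np.outer(ghz, ghz)

def fidelity(rho):
    return float(np.real(ghz @ rho @ ghz))

plain = apply_channel(rho0, damp)
protected = apply_channel(apply_channel(apply_channel(rho0, weak), damp),
                          reverse)
prob = float(np.real(np.trace(protected)))
protected /= prob
print(f"fidelity without protection: {fidelity(plain):.3f}")
print(f"fidelity with WM + reversal: {fidelity(protected):.3f} "
      f"(success probability {prob:.3f})")
```

With these parameters the protected fidelity comes out far higher, but the success probability drops well below 1, the trade-off between protection and post-selection that such schemes entail.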

  15. Hubble confirms cosmic acceleration with weak lensing

    NASA Image and Video Library

    2017-12-08

    NASA/ESA Hubble Release Date: March 25, 2010 This image shows a smoothed reconstruction of the total (mostly dark) matter distribution in the COSMOS field, created from data taken by the NASA/ESA Hubble Space Telescope and ground-based telescopes. It was inferred from the weak gravitational lensing distortions that are imprinted onto the shapes of background galaxies. The colour coding indicates the distance of the foreground mass concentrations as gathered from the weak lensing effect. Structures shown in white, cyan, and green are typically closer to us than those indicated in orange and red. To improve the resolution of the map, data from galaxies both with and without redshift information were used. The new study presents the most comprehensive analysis of data from the COSMOS survey. The researchers have, for the first time ever, used Hubble and the natural "weak lenses" in space to characterise the accelerated expansion of the Universe. Credit: NASA, ESA, P. Simon (University of Bonn) and T. Schrabback (Leiden Observatory) To learn more about this image, go to: www.spacetelescope.org/news/html/heic1005.html For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html

  16. Weak values and weak coupling maximizing the output of weak measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Lorenzo, Antonio, E-mail: dilorenzo.antonio@gmail.com

    2014-06-15

    In a weak measurement, the average output $\langle o \rangle$ of a probe that measures an observable $\hat{A}$ of a quantum system undergoing both a preparation in a state $\rho_i$ and a postselection in a state $E_f$ is, to a good approximation, a function of the weak value $A_w = \mathrm{Tr}[E_f \hat{A} \rho_i] / \mathrm{Tr}[E_f \rho_i]$, a complex number. For a fixed coupling $\lambda$, when the overlap $\mathrm{Tr}[E_f \rho_i]$ is very small, $A_w$ diverges, but $\langle o \rangle$ stays finite, often tending to zero for symmetry reasons. This paper answers the questions: what is the weak value that maximizes the output for a fixed coupling? What is the coupling that maximizes the output for a fixed weak value? We derive equations for the optimal values of $A_w$ and $\lambda$, and provide the solutions. The results are independent of the dimensionality of the system, and they apply to a probe having a Hilbert space of arbitrary dimension. Using the Schrödinger–Robertson uncertainty relation, we demonstrate that, in an important case, the amplification $\langle o \rangle$ cannot exceed the initial uncertainty $\sigma_o$ in the observable $\hat{o}$; we provide an upper limit for the more general case, and a strategy to obtain $\langle o \rangle \gg \sigma_o$. - Highlights: • We provide a general framework to find the extremal values of a weak measurement. • We derive the location of the extremal values in terms of preparation and postselection. • We devise a maximization strategy going beyond the limit of the Schrödinger–Robertson relation.
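    A short numerical check of the weak-value expression quoted above for a qubit with nearly orthogonal pre- and postselection; the states and coupling are arbitrary illustrative choices:

```python
import numpy as np

# Weak value A_w = Tr[E_f A rho_i] / Tr[E_f rho_i]; for a weak coupling
# lam the pointer shift is approximately lam * Re(A_w).
A = np.array([[1, 0], [0, -1]], dtype=complex)          # Pauli Z

alpha, eps = np.pi / 4, 0.05
beta = 3 * np.pi / 4 - eps                              # near-orthogonal
psi_i = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)
psi_f = np.array([np.cos(beta), np.sin(beta)], dtype=complex)

rho_i = np.outer(psi_i, psi_i.conj())
E_f = np.outer(psi_f, psi_f.conj())

overlap = np.trace(E_f @ rho_i)
A_w = np.trace(E_f @ A @ rho_i) / overlap
lam = 0.01
print(f"overlap Tr[E_f rho_i] = {overlap.real:.4f}")
print(f"weak value A_w = {A_w.real:.2f}{A_w.imag:+.2f}j")
print(f"approximate pointer shift lam*Re(A_w) = {lam * A_w.real:.3f}")
```

The weak value here lies far outside the eigenvalue range ±1 of Pauli Z, precisely the anomalous-amplification regime the paper analyzes.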

  17. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: (1) the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and (2) the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.

  18. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm on the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is no longer altered. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy without relying on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical to address the poor image quality issue of
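    A hedged sketch of one pass of the projection-domain estimate: take the difference between raw projections and the reprojected template, low-pass filter it, and subtract; the toy projections, the template mismatch model, and the Gaussian filter width are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Scatter is taken as the low-frequency part of the difference between
# raw projections and the reprojected template (toy 2-D projections).
rng = np.random.default_rng(0)
true_proj = rng.uniform(1.0, 3.0, size=(64, 64))           # scatter-free
true_scatter = gaussian_filter(rng.uniform(0, 1, (64, 64)), sigma=12)
raw_proj = true_proj + true_scatter                        # measured

# Reprojected template: right on average, wrong in fine detail.
template_proj = true_proj + rng.normal(0.0, 0.05, (64, 64))

scatter_est = gaussian_filter(raw_proj - template_proj, sigma=8)
corrected = raw_proj - scatter_est

def rms(x):
    return float(np.sqrt((x ** 2).mean()))

print(f"RMS error before: {rms(raw_proj - true_proj):.3f}, "
      f"after: {rms(corrected - true_proj):.3f}")
```

Iterating, as the abstract describes, re-segments the corrected image to improve the template and repeats this step until the template stabilizes.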

  19. When generating answers benefits arithmetic skill: the importance of prior knowledge.

    PubMed

    Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij

    2008-09-01

    People remember information better if they generate it while studying than if they simply read it. However, prior research has not investigated whether this generation effect extends to related but unstudied items, and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied multiplication problems after they spent a class period generating answers to problems or reading the answers from a calculator. The effect of condition interacted with prior knowledge. Students with low prior knowledge had higher accuracy in the generate condition, but as prior knowledge increased, the advantage of generating answers decreased. The benefits of generating answers may extend to unstudied items and to classroom settings, but only for learners with low prior knowledge.

  20. Constraining f(R) Gravity Theory Using Weak Lensing Peak Statistics from the Canada-France-Hawaii-Telescope Lensing Survey

    NASA Astrophysics Data System (ADS)

    Liu, Xiangkun; Li, Baojiu; Zhao, Gong-Bo; Chiu, Mu-Chen; Fang, Wei; Pan, Chuzhong; Wang, Qiao; Du, Wei; Yuan, Shuo; Fu, Liping; Fan, Zuhui

    2016-07-01

    In this Letter, we report the observational constraints on the Hu-Sawicki f(R) theory derived from weak lensing peak abundances, which are closely related to the mass function of massive halos. In comparison with studies using optical or x-ray clusters of galaxies, weak lensing peak analyses have the advantage of not relying on mass-baryonic observable calibrations. With observations from the Canada-France-Hawaii-Telescope Lensing Survey, our peak analyses give rise to a tight constraint on the model parameter |fR0| for n = 1. The 95% C.L. is log10|fR0| < −4.82 given WMAP9 priors on (Ωm, As). With Planck15 priors, the corresponding result is log10|fR0| < −5.16.

  1. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
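    A hedged sketch of the Mixture Density Mercer Kernel idea as described: build an ensemble of mixture density models on bootstrap resamples and average the inner products of posterior membership vectors, which is positive semi-definite by construction. The sklearn-based implementation and all parameters are illustrative choices, not the AUTOBAYES-generated code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Kernel = averaged inner product of posterior cluster-membership
# vectors across an ensemble of bootstrap-trained mixture models.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])

def mdmk(X, n_models=10, n_components=3):
    K = np.zeros((len(X), len(X)))
    for m in range(n_models):
        idx = rng.integers(0, len(X), len(X))          # bootstrap resample
        gmm = GaussianMixture(n_components=n_components,
                              random_state=m).fit(X[idx])
        P = gmm.predict_proba(X)                        # membership vectors
        K += P @ P.T                                    # Gram matrix, PSD
    return K / n_models

K = mdmk(X)
print("kernel PSD:", bool(np.all(np.linalg.eigvalsh(K) > -1e-8)))
print("within- vs cross-cluster similarity:",
      round(float(K[:40, :40].mean()), 3), round(float(K[:40, 40:].mean()), 3))
```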

  2. Weak value controversy

    NASA Astrophysics Data System (ADS)

    Vaidman, L.

    2017-10-01

    Recent controversy regarding the meaning and usefulness of weak values is reviewed. It is argued that, in spite of recent statistical arguments by Ferrie and Combes, experiments with anomalous weak values provide useful amplification techniques for precision measurements of small effects in many realistic situations. The statistical nature of weak values is questioned. Although measuring weak values requires an ensemble, it is argued that the weak value, similarly to an eigenvalue, is a property of a single pre- and post-selected quantum system. This article is part of the themed issue 'Second quantum revolution: foundational questions'.

  3. 22 CFR 129.8 - Prior notification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 Prior notification. Section 129.8 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS REGISTRATION AND LICENSING OF...,000, except for sharing of basic marketing information (e.g., information that does not include...

  4. Predictive top-down integration of prior knowledge during speech perception.

    PubMed

    Sohoglu, Ediz; Peelle, Jonathan E; Carlyon, Robert P; Davis, Matthew H

    2012-06-20

    A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding in which sensory activity is compared with top-down predictions and only unexplained activity propagated through the cortical hierarchy.

  5. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
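
    The sample-size effect described above is the textbook conjugate-prior calculation. The toy below (our illustration, not the study's fMRI analysis) shows the weight on likelihood information growing with the number of observed draws, exactly the pattern subjects exhibited.

    ```python
    # Beta-Binomial integration: with a Beta(a, b) prior on reward probability
    # and k successes in n draws, the posterior mean is a precision-weighted
    # average whose weight on the sample proportion grows with n.
    a, b = 4.0, 4.0                      # prior: mean 0.5, "8 pseudo-observations"
    for n, k in [(2, 2), (8, 8), (32, 32)]:
        post_mean = (a + k) / (a + b + n)
        w_lik = n / (a + b + n)          # weight on the sample proportion k/n
        print(f"n={n:2d}: posterior mean={post_mean:.3f}, likelihood weight={w_lik:.2f}")
    ```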

  6. The impact of using informative priors in a Bayesian cost-effectiveness analysis: an application of endovascular versus open surgical repair for abdominal aortic aneurysms in high-risk patients.

    PubMed

    McCarron, C Elizabeth; Pullenayegum, Eleanor M; Thabane, Lehana; Goeree, Ron; Tarride, Jean-Eric

    2013-04-01

    Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.
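
    A minimal sketch of the mechanism at work, assuming simple normal-normal updating of the incremental effect; the numbers, the willingness-to-pay value, and the one-dimensional shortcut are ours, not the paper's bivariate normal model of the study data.

    ```python
    # Toy: an informative prior on incremental effectiveness shrinks the trial
    # estimate and can flip the incremental net monetary benefit (INMB).
    lam = 50_000.0                                     # willingness to pay per life year

    def posterior(mu_data, var_data, mu_prior, var_prior):
        w = (1 / var_data) / (1 / var_data + 1 / var_prior)   # precision weighting
        return w * mu_data + (1 - w) * mu_prior

    dE_data, dC_data = 0.40, 12_000.0                  # trial: incremental LYs, costs
    dE_post = posterior(dE_data, 0.04, 0.10, 0.02)     # literature prior pulls dE down
    print("INMB, data only :", lam * dE_data - dC_data)   #  8000: adopt
    print("INMB, with prior:", lam * dE_post - dC_data)   # -2000: do not adopt
    ```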

  7. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    Patient experience increased by adding video information prior to the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion artefacts. Video information prior to examinations can be an easy and time-effective method to help patients cooperate in imaging procedures. © 2017 John Wiley & Sons Ltd.

  8. A novel microfluidics-based method for probing weak protein-protein interactions.

    PubMed

    Tan, Darren Cherng-wen; Wijaya, I Putu Mahendra; Andreasson-Ochsner, Mirjam; Vasina, Elena Nikolaevna; Nallani, Madhavan; Hunziker, Walter; Sinner, Eva-Kathrin

    2012-08-07

    We report the use of a novel microfluidics-based method to detect weak protein-protein interactions between membrane proteins. The tight junction protein, claudin-2, synthesised in vitro using a cell-free expression system in the presence of polymer vesicles as membrane scaffolds, was used as a model membrane protein. Individual claudin-2 molecules interact weakly, although the cumulative effect of these interactions is significant. This effect results in a transient decrease of average vesicle dispersivity and reduction in transport speed of claudin-2-functionalised vesicles. Polymer vesicles functionalised with claudin-2 were perfused through a microfluidic channel and the time taken to traverse a defined distance within the channel was measured. Functionalised vesicles took 1.19 to 1.69 times longer to traverse this distance than unfunctionalised ones. Coating the channel walls with protein A and incubating the vesicles with anti-claudin-2 antibodies prior to perfusion resulted in the functionalised vesicles taking 1.75 to 2.5 times longer to traverse this distance compared to the controls. The data show that our system is able to detect weak as well as strong protein-protein interactions. This system offers researchers a portable, easily operated and customizable platform for the study of weak protein-protein interactions, particularly between membrane proteins.

  9. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory

    ERIC Educational Resources Information Center

    Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca

    2013-01-01

    A laboratory to determine the equilibrium constants of weak acid-weak base reactions is described. The equilibrium constants of component reactions when multiplied together equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions, weak base hydrolysis…

  10. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in the image.
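
    To make the role of the prior-image strength concrete, here is a deliberately small quadratic PIBR-style objective solved by gradient descent. The paper's analytical estimator approximation and spatially varying strength map are not reproduced; the toy projector and all sizes below are our assumptions.

    ```python
    # Toy PIBR objective: x* = argmin ||A x - y||^2 + beta * ||x - x_prior||^2,
    # where the prior image lacks the anatomical change present in the data.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))           # toy, underdetermined "projector"
    x_true = np.zeros(100); x_true[10:20] = 1.0  # anatomy with a change
    x_prior = np.zeros(100)                      # prior image lacks the change
    y = A @ x_true + 0.01 * rng.standard_normal(40)

    def pibr(beta, iters=2000, step=1e-3):
        x = x_prior.copy()
        for _ in range(iters):
            grad = A.T @ (A @ x - y) + beta * (x - x_prior)
            x -= step * grad
        return x

    # Extremes do worse; an intermediate beta recovers the change best (qualitatively).
    for beta in (0.01, 1.0, 100.0):
        err = np.linalg.norm(pibr(beta) - x_true)
        print(f"beta={beta:>6}: ||x - x_true|| = {err:.2f}")
    ```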

  11. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094

  12. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy.

  13. Weak values, 'negative probability', and the uncertainty principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolovski, D.

    2007-10-15

    A quantum transition can be seen as a result of interference between various pathways (e.g., Feynman paths), which can be labeled by a variable f. An attempt to determine the value of f without destroying the coherence between the pathways produces a weak value of f. We show f to be an average obtained with an amplitude distribution which can, in general, take negative values, which, in accordance with the uncertainty principle, need not contain information about the actual range of f which contributes to the transition. It is also demonstrated that the moments of such alternating distributions have a number of unusual properties which may lead to a misinterpretation of the weak-measurement results. We provide a detailed analysis of weak measurements with and without post-selection. Examples include the double-slit diffraction experiment, weak von Neumann and von Neumann-like measurements, traversal time for an elastic collision, phase time, and local angular momentum.

  14. Bayesian bivariate meta-analysis of diagnostic test studies with interpretable priors.

    PubMed

    Guo, Jingyi; Riebler, Andrea; Rue, Håvard

    2017-08-30

    In a bivariate meta-analysis, the number of diagnostic studies involved is often very low so that frequentist methods may result in problems. Using Bayesian inference is particularly attractive as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximations method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remains. We explore the penalised complexity (PC) prior framework for specifying informative priors for the variance parameters and the correlation parameter. PC priors facilitate model interpretation and hyperparameter specification as expert knowledge can be incorporated intuitively. We conduct a simulation study to compare the properties and behaviour of differently defined PC priors to currently used priors in the field. The simulation study shows that the PC prior seems beneficial for the variance parameters. The use of PC priors for the correlation parameter results in more precise estimates when specified in a sensible neighbourhood around the truth. To investigate the usage of PC priors in practice, we reanalyse a meta-analysis using the telomerase marker for the diagnosis of bladder cancer and compare the results with those obtained by other commonly used modelling approaches. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Explanation and Prior Knowledge Interact to Guide Learning

    ERIC Educational Resources Information Center

    Williams, Joseph J.; Lombrozo, Tania

    2013-01-01

    How do explaining and prior knowledge contribute to learning? Four experiments explored the relationship between explanation and prior knowledge in category learning. The experiments independently manipulated whether participants were prompted to explain the category membership of study observations and whether category labels were informative in…

  16. Generalized multiple kernel learning with data-dependent priors.

    PubMed

    Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li

    2015-06-01

    Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.

  17. Information display: the weak link for NCW

    NASA Astrophysics Data System (ADS)

    Gilger, Mike

    2006-05-01

    The Global Information Grid (GIG) enables the dissemination of real-time data from any sensor/source as well as the distribution of that data immediately to recipients across the globe, resulting in better, faster, and more accurate decisions, reduced operational risk, and a more competitive war-fighting advantage. As a major component of Network Centric Warfare (NCW), the GIG seeks to provide the integrated information infrastructure necessary to connect the robust data streams from ConstellationNet, FORCENet, and LandWarNet to allow Joint Forces to move beyond Situational Awareness and into Situational Understanding. NCW will provide the Joint Forces a common situational understanding, a common operating picture, and any and all information necessary for rapid decision-making. However, with the exception of the 1994 introduction of the Military Standard 2525 "Common Warfighting Symbology," there has been no notable improvement in our ability to display information for accurate and rapid understanding. In fact, one of the notable problems associated with NCW is how to process the massive amount of newly integrated data being thrown at the warfighter: a significant human-machine interface challenge. The solution: a graphical language called GIFIC (Graphical Interface for Information Cognition) that can display thousands of data points simultaneously. Coupled with the new generation COP displays, GIFIC provides for the tremendous amounts of information-display required for effective NCW battlespace awareness requirements, offering instant insight into joint operations, tactical situations, and targeting necessities. GIFIC provides the next level of information-display necessary for a successful NCW, resulting in agile, high-performance, and highly competitive warfighters.

  18. Iterative Region-of-Interest Reconstruction from Limited Data Using Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2017-12-01

    In practice, computed tomography and computed laminography applications suffer from incomplete data. In particular, when inspecting large objects with extremely different diameters in longitudinal and transversal directions or when high resolution reconstructions are desired, the physical conditions of the scanning system lead to restricted data and truncated projections, also known as the interior or region-of-interest (ROI) problem. To recover the searched-for density function of the inspected object, we derive a semi-discrete model of the ROI problem that inherently allows the incorporation of geometrical prior information in an abstract Hilbert space setting for bounded linear operators. Assuming that the attenuation inside the object is approximately constant, as for fibre reinforced plastics parts or homogeneous objects where one is interested in locating defects like cracks or porosities, we apply the semi-discrete Landweber-Kaczmarz method to recover the inner structure of the object inside the ROI from the measured data resulting in a semi-discrete iteration method. Finally, numerical experiments for three-dimensional tomographic applications with both an inherent restricted source and ROI problem are provided to verify the proposed method for the ROI reconstruction.
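
    A minimal sketch of the iteration named above: plain Landweber updates x ← x + ω Aᵀ(y − Ax), with a crude projection enforcing the assumed constant attenuation outside the ROI. The toy operator, sizes, and projection step are stand-ins for the paper's Hilbert-space formulation, not its implementation.

    ```python
    # Landweber iteration with a simple "known background" constraint outside
    # the region of interest (illustrative toy, restricted-data setting).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    A = rng.standard_normal((48, n)) / np.sqrt(n)   # toy restricted-data operator
    x_true = np.ones(n); x_true[25:30] = 0.2        # homogeneous part + "defect" in ROI
    roi = np.zeros(n, dtype=bool); roi[20:40] = True
    y = A @ x_true

    x = np.ones(n)                                  # start from the assumed background
    w = 1.0 / np.linalg.norm(A, 2) ** 2             # relaxation within convergence range
    for _ in range(5000):
        x += w * A.T @ (y - A @ x)
        x[~roi] = 1.0                               # prior: constant attenuation outside ROI
    print("max error inside ROI:", np.abs((x - x_true)[roi]).max())
    ```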

  19. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    PubMed

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work is set within a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates the state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  20. Weak values in continuous weak measurements of qubits

    NASA Astrophysics Data System (ADS)

    Qin, Lupei; Liang, Pengfei; Li, Xin-Qi

    2015-07-01

    For continuous weak measurements of qubits, we obtain exact expressions for weak values (WVs) from the postselection-restricted average of measurement outputs, by using both the quantum-trajectory equation (QTE) and the quantum Bayesian approach. The former is applicable to short-time weak measurement, while the latter allows the measurement strength to be finite. We find that even in the "very" weak limit the result can be essentially different from the one originally proposed by Aharonov, Albert, and Vaidman (AAV), in the sense that our result incorporates a nonperturbative correction which could be important when the AAV WV is large. Within the Bayesian framework, we also obtain elegant expressions for finite measurement strength and find that the amplifier's noise in quantum measurement has no effect on the WVs. In particular, we obtain very useful results for homodyne measurement in a circuit-QED system, which allows for measuring the real and imaginary parts of the AAV WV by simply tuning the phase of the local oscillator. This advantage can be exploited as an efficient state-tomography technique.
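
    For reference, the standard AAV weak value that the abstract builds on, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, can be computed in a few lines; this is the textbook definition only, not the nonperturbative corrections derived in the paper, and the states below are illustrative.

    ```python
    # AAV weak value for a qubit: nearly orthogonal post-selection makes the
    # weak value of sigma_z land far outside its eigenvalue range [-1, 1].
    import numpy as np

    sigma_z = np.diag([1.0, -1.0])
    psi = np.array([1.0, 1.0]) / np.sqrt(2)        # pre-selected state (+x)
    theta = -np.pi / 4 + 0.05                      # post-selection nearly orthogonal to psi
    phi = np.array([np.cos(theta), np.sin(theta)])
    A_w = (phi @ sigma_z @ psi) / (phi @ psi)
    print("weak value of sigma_z:", A_w)           # ~20, i.e. strongly "anomalous"
    ```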

  1. Intra-articular steroid injection for osteoarthritis of the hip prior to total hip arthroplasty: is it safe? A systematic review.

    PubMed

    Pereira, L C; Kerr, J; Jolles, B M

    2016-08-01

    Using a systematic review, we investigated whether there is an increased risk of post-operative infection in patients who have received an intra-articular corticosteroid injection to the hip for osteoarthritis prior to total hip arthroplasty (THA). Studies dealing with an intra-articular corticosteroid injection to the hip and infection following subsequent THA were identified from databases for the period between 1990 and 2013. Retrieved articles were independently assessed for their methodological quality. A total of nine studies met the inclusion criteria. Two recommended against a steroid injection prior to THA and seven found no risk with an injection. No prospective controlled trials were identified. Most studies were retrospective. Lack of information about the methodology was a consistent flaw. The literature in this area is scarce and the evidence is weak. Most studies were retrospective, and confounding factors were poorly defined or not addressed. There is thus currently insufficient evidence to conclude that an intra-articular corticosteroid injection administered prior to THA increases the rate of infection. High quality, multicentre randomised trials are needed to address this issue. Cite this article: Bone Joint J 2016;98-B:1027-35. ©2016 The British Editorial Society of Bone & Joint Surgery.

  2. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  3. The role of Bs-->Kπ in determining the weak phase γ

    NASA Astrophysics Data System (ADS)

    Gronau, M.; Rosner, J. L.

    2000-06-01

    The decay rates for B0-->K+π-, B+-->K0π+, and the charge-conjugate processes were found to provide information on the weak phase γ≡Arg(Vub*) when the ratio r of weak tree and penguin amplitudes was taken from data on B-->ππ or semileptonic B-->π decays. We show here that the rates for Bs-->K-π+ and B¯s-->K+π- can provide the necessary information on r, and estimate the statistical accuracy of forthcoming measurements at the Fermilab Tevatron.

  4. Experimental investigations of weak definite and weak indefinite noun phrases

    PubMed Central

    Klein, Natalie M.; Gegg-Harrison, Whitney M.; Carlson, Greg N.; Tanenhaus, Michael K.

    2013-01-01

    Definite noun phrases typically refer to entities that are uniquely identifiable in the speaker and addressee’s common ground. Some definite noun phrases (e.g. the hospital in Mary had to go to the hospital and John did too) seem to violate this uniqueness constraint. We report six experiments that were motivated by the hypothesis that these “weak definite” interpretations arise in “incorporated” constructions. Experiments 1-3 compared nouns that seem to allow for a weak definite interpretation (e.g. hospital, bank, bus, radio) with those that do not (e.g. farm, concert, car, book). Experiments 1 and 2 used an instruction-following task and picture-judgment task, respectively, to demonstrate that a weak definite need not uniquely refer. In Experiment 3 participants imagined scenarios described by sentences such as The Federal Express driver had to go to the hospital/farm. The imagined scenarios following weak definite noun phrases were more likely to include conventional activities associated with the object, whereas following regular nouns, participants were more likely to imagine scenarios that included typical activities associated with the subject; similar effects were observed with weak indefinites. Experiment 4 found that object-related activities were reduced when the same subject and object were used with a verb that does not license weak definite interpretations. In Experiment 5, a science fiction story introduced an artificial lexicon for novel concepts. Novel nouns that shared conceptual properties with English weak definite nouns were more likely to allow weak reference in a judgment task. Experiment 6 demonstrated that familiarity for definite articles and anti-familiarity for indefinite articles applies to the activity associated with the noun, consistent with predictions made by the incorporation analysis. PMID:23685208

  5. Postselected weak measurement beyond the weak value

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geszti, Tamas

    2010-04-15

    Closed expressions are derived for the quantum measurement statistics of pre- and postselected Gaussian particle beams. The weakness of the preselection step is shown to compete with the nonorthogonality of postselection in a transparent way. The approach is shown to be useful in analyzing postselection-based signal amplification, allowing measurements to be extended far beyond the range of validity of the well-known Aharonov-Albert-Vaidman limit. Additionally, the present treatment connects postselected weak measurement to the topic of phase-contrast microscopy.

  6. Protecting quantum Fisher information in curved space-time

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming

    2018-03-01

    In this work, we investigate the quantum Fisher information (QFI) dynamics of a two-level atom interacting with quantized conformally coupled massless scalar fields in de Sitter-invariant vacuum. We first derive the master equation that governs its evolution. It is found that the QFI decays with evolution time. Furthermore, we propose two schemes to protect QFI by employing prior weak measurement (WM) and post measurement reversal (MR). We find that the first scheme cannot always protect QFI and the second scheme has a prominent advantage over the first scheme.

  7. WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Niu, T

    2015-06-15

    Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies general knowledge of the relatively uniform CT number distribution in one tissue component. Image segmentation is applied to construct a template image where each structure is filled with the same CT number of that specific tissue. By subtracting the ideal template from the CT image, the residual from various error sources is generated. Since the forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous and low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on the shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the one without correction, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction. This work is supported by the National
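
    A one-dimensional caricature of the loop described above (ours, not the authors' implementation): segment to a template, keep only the smooth part of the residual as the shading estimate, and subtract. The forward-projection and FDK steps are collapsed here into direct image-domain smoothing with a Savitzky-Golay filter, and all numbers are invented.

    ```python
    # Toy iterative shading correction: template subtraction + low-pass residual.
    import numpy as np
    from scipy.signal import savgol_filter

    x = np.linspace(0, 1, 400)
    tissue = np.where((x > 0.3) & (x < 0.7), 60.0, 0.0)   # two "tissue" classes (HU)
    shading = 120.0 * (x - 0.5) ** 2 - 15.0               # smooth low-frequency artifact
    img = tissue + shading

    for _ in range(5):                                    # iterate as segmentation improves
        template = np.where(img > 30.0, 60.0, 0.0)        # fill classes with nominal HU
        residual = img - template
        img = img - savgol_filter(residual, window_length=151, polyorder=2)
    print("max |error| after correction:", np.abs(img - tissue).max())
    ```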

  8. Using cross correlations to calibrate lensing source redshift distributions: Improving cosmological constraints from upcoming weak lensing surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Putter, Roland; Doré, Olivier; Das, Sudeep

    2014-01-10

    Cross correlations between the galaxy number density in a lensing source sample and that in an overlapping spectroscopic sample can in principle be used to calibrate the lensing source redshift distribution. In this paper, we study in detail to what extent this cross-correlation method can mitigate the loss of cosmological information in upcoming weak lensing surveys (combined with a cosmic microwave background prior) due to lack of knowledge of the source distribution. We consider a scenario where photometric redshifts are available and find that, unless the photometric redshift distribution p(z_ph|z) is calibrated very accurately a priori (bias and scatter known to ∼0.002 for, e.g., EUCLID), the additional constraint on p(z_ph|z) from the cross-correlation technique to a large extent restores the cosmological information originally lost due to the uncertainty in dn/dz(z). Considering only the gain in photo-z accuracy and not the additional cosmological information, enhancements of the dark energy figure of merit of up to a factor of four (40) can be achieved for a SuMIRe-like (EUCLID-like) combination of lensing and redshift surveys, where SuMIRe stands for Subaru Measurement of Images and Redshifts. However, the success of the method is strongly sensitive to our knowledge of the galaxy bias evolution in the source sample and we find that a percent level bias prior is needed to optimize the gains from the cross-correlation method (i.e., to approach the cosmology constraints attainable if the bias was known exactly).

  9. 21 CFR 1.280 - How must you submit prior notice?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to FDA. You must submit all prior notice information in the English language, except that an... Commercial System (ABI/ACS); or (2) The FDA PNSI at http://www.access.fda.gov. You must submit prior notice through the FDA Prior Notice System Interface (FDA PNSI) for articles of food imported or offered for...

  10. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
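
    One deliberately simple way to realize the mechanism: treat a stated probability as evidence worth a fixed number of pseudo-observations and combine it with a Beta prior. The pseudo-count model below is our simplification of the paper's scheme, chosen only to reproduce the qualitative over/underweighting pattern.

    ```python
    # "As-if" probability from Bayesian combination of a stated p with a prior:
    # small p is overweighted, large p underweighted, with a fixed point at 0.5.
    def w(p, a=1.0, b=1.0, n=8.0):
        return (a + n * p) / (a + b + n)

    for p in (0.01, 0.10, 0.50, 0.90, 0.99):
        print(f"stated p={p:.2f} -> used {w(p):.3f}")
    # 0.01 -> 0.108 (overweighted), 0.99 -> 0.892 (underweighted), 0.50 unchanged
    ```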

  11. Efficient structure from motion on large scenes using UAV with position and pose information

    NASA Astrophysics Data System (ADS)

    Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang

    2018-04-01

    In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large scene reconstruction from images acquired by Unmanned Aerial Vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored compared to the unmanned aerial vehicle's flight altitude, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs between projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, projective transformations between views can also be used to eliminate false matching.

  12. Calibrated birth-death phylogenetic time-tree priors for bayesian inference.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2015-05-01

    Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  13. Hartman effect and weak measurements that are not really weak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolovski, D.; IKERBASQUE, Basque Foundation for Science, Alameda Urquijo, 36-5, Plaza Bizkaia, 48011, Bilbao, Bizkaia; Akhmatskaya, E.

    2011-08-15

    We show that in wave packet tunneling, localization of the transmitted particle amounts to a quantum measurement of the delay it experiences in the barrier. With no external degree of freedom involved, the envelope of the wave packet plays the role of the initial pointer state. Under tunneling conditions such ''self-measurement'' is necessarily weak, and the Hartman effect just reflects the general tendency of weak values to diverge as postselection in the final state becomes improbable. We also demonstrate that it is a good precision, or a 'not really weak', quantum measurement: no matter how wide the barrier d, it is possible to transmit a wave packet with a width σ small compared to the observed advancement. As is the case with all weak measurements, the probability of transmission rapidly decreases with the ratio σ/d.

  14. Q weak: First direct measurement of the proton’s weak charge

    DOE PAGES

    Androic, D.; Armstrong, D. S.; Asaturyan, A.; ...

    2017-03-22

    The Q weak experiment, which took data at Jefferson Lab in the period 2010-2012, will precisely determine the weak charge of the proton by measuring the parity-violating asymmetry in elastic e-p scattering at 1.1 GeV using a longitudinally polarized electron beam and a liquid hydrogen target at a low momentum transfer of Q² = 0.025 (GeV/c)². The weak charge of the proton is predicted by the Standard Model and any significant deviation would indicate physics beyond the Standard Model. The technical challenges and experimental apparatus for measuring the weak charge of the proton will be discussed, as well as the method of extracting the weak charge of the proton. Finally, the results from a small subset of the data that has been published will also be presented. Furthermore, an update will be given of the current status of the data analysis.

  15. Q weak: First direct measurement of the proton’s weak charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Androic, D.; Armstrong, D. S.; Asaturyan, A.

    The Q weak experiment, which took data at Jefferson Lab in the period 2010-2012, will precisely determine the weak charge of the proton by measuring the parity-violating asymmetry in elastic e-p scattering at 1.1 GeV using a longitudinally polarized electron beam and a liquid hydrogen target at a low momentum transfer of Q² = 0.025 (GeV/c)². The weak charge of the proton is predicted by the Standard Model and any significant deviation would indicate physics beyond the Standard Model. The technical challenges and experimental apparatus for measuring the weak charge of the proton will be discussed, as well as the method of extracting the weak charge of the proton. Finally, the results from a small subset of the data that has been published will also be presented. Furthermore, an update will be given of the current status of the data analysis.

  16. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor network (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices and in the second phase, the orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement result, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
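
    For orientation, the first vOMMP phase builds on plain orthogonal matching pursuit; a compact generic OMP is sketched below. The prior-guided index selection, the OMMP rescue phase, and the QR-based matrix-inversion-free step of the paper are omitted, and all sizes are illustrative.

    ```python
    # Orthogonal matching pursuit: greedy support growth + least-squares refit.
    import numpy as np

    def omp(Phi, y, k):
        """Recover a k-sparse x from y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-correlated atom
            support.append(j)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_s           # orthogonal projection step
        x = np.zeros(Phi.shape[1]); x[support] = x_s
        return x

    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((32, 128)) / np.sqrt(32)
    x_true = np.zeros(128); x_true[[3, 40, 77]] = [1.0, -0.5, 2.0]
    x_hat = omp(Phi, Phi @ x_true, k=3)
    print("support found:", np.flatnonzero(x_hat))         # expect [3 40 77]
    ```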

  17. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE PAGES

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-19

    We report that metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Finally, using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
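
    The bookkeeping at the heart of the method can be shown in one toy dimension: measure the finite-difference response of an ellipticity estimator to a small applied shear, then divide the mean estimate by the mean response. The `apply_shear` and `measure_e` functions below are hypothetical stand-ins for the image-level operations (in the real method the same observed image is resheared after deconvolution).

    ```python
    # Toy metacalibration: finite-difference shear response R corrects the
    # multiplicative bias of a (deliberately nonlinear) estimator.
    import numpy as np

    rng = np.random.default_rng(3)
    g_true, dg = 0.02, 0.01
    e_int = 0.2 * rng.standard_normal(100_000)      # intrinsic ellipticities

    def apply_shear(e, g):                          # hypothetical shearing with a
        return e + g * (1.0 + 0.3 * e**2)           # nonlinearity, so response != 1

    def measure_e(e):                               # hypothetical noisy estimator
        return e + 0.01 * rng.standard_normal(e.shape)

    e_obs = measure_e(apply_shear(e_int, g_true))
    R = (measure_e(apply_shear(e_int, g_true + dg))
         - measure_e(apply_shear(e_int, g_true - dg))) / (2 * dg)
    print("naive shear:", e_obs.mean())
    print("calibrated :", e_obs.mean() / R.mean())  # mean response removes the bias
    ```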

  18. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-20

    Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.

  19. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  20. Weak Measurement and Quantum Smoothing of a Superconducting Qubit

    NASA Astrophysics Data System (ADS)

    Tan, Dian

    In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.
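
    The smoothed probabilities the thesis refers to are commonly computed from the pair [rho(t), E(t)] via the past-quantum-state formula p(a) ∝ Tr(Π_a ρ Π_a E). A two-level numerical example follows; the formula is the standard one from the quantum smoothing literature, and the numbers are illustrative, unrelated to the experiment.

    ```python
    # Smoothed (retrodicted) outcome probabilities from the pair [rho, E].
    import numpy as np

    rho = np.diag([0.7, 0.3])                     # prediction from the past record
    E   = np.diag([0.2, 0.8])                     # retrodiction from the future record
    proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

    weights = [np.trace(P @ rho @ P @ E).real for P in proj]
    p_smooth = np.array(weights) / sum(weights)
    print("predicted p:", np.diag(rho))           # [0.7, 0.3]
    print("smoothed  p:", p_smooth)               # [0.368, 0.632] -- hindsight shifts it
    ```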

  1. Dark-matter particles without weak-scale masses or weak interactions.

    PubMed

    Feng, Jonathan L; Kumar, Jason

    2008-12-05

    We propose that dark matter is composed of particles that naturally have the correct thermal relic density, but have neither weak-scale masses nor weak interactions. These models emerge naturally from gauge-mediated supersymmetry breaking, where they elegantly solve the dark-matter problem. The framework accommodates single or multiple component dark matter, dark-matter masses from 10 MeV to 10 TeV, and interaction strengths from gravitational to strong. These candidates enhance many direct and indirect signals relative to weakly interacting massive particles and have qualitatively new implications for dark-matter searches and cosmological implications for colliders.

  2. Weak quadrupole moments

    NASA Astrophysics Data System (ADS)

    Lackenby, B. G. C.; Flambaum, V. V.

    2018-07-01

    We introduce the weak quadrupole moment (WQM) of nuclei, related to the quadrupole distribution of the weak charge in the nucleus. The WQM produces a tensor weak interaction between the nucleus and electrons and can be observed in atomic and molecular experiments measuring parity nonconservation. The dominating contribution to the weak quadrupole is given by the quadrupole moment of the neutron distribution; therefore, corresponding experiments should allow one to measure the neutron quadrupoles. Using the deformed oscillator model and the Schmidt model we calculate the quadrupole distributions of neutrons, Q_n, the WQMs, Q_W^(2), and the Lorentz-invariance-violating energy shifts in 9Be, 21Ne, 27Al, 131Xe, 133Cs, 151Eu, 153Eu, 163Dy, 167Er, 173Yb, 177Hf, 179Hf, 181Ta, 201Hg and 229Th.

  3. Robust Bayesian hypocentre and uncertainty region estimation: the effect of heavy-tailed distributions and prior information in cases with poor, inconsistent and insufficient arrival times

    NASA Astrophysics Data System (ADS)

    Martinsson, J.

    2013-03-01

    We propose methods for robust Bayesian inference of the hypocentre in presence of poor, inconsistent and insufficient phase arrival times. The objectives are to increase the robustness, the accuracy and the precision by introducing heavy-tailed distributions and an informative prior distribution of the seismicity. The effects of the proposed distributions are studied under real measurement conditions in two underground mine networks and validated using 53 blasts with known hypocentres. To increase the robustness against poor, inconsistent or insufficient arrivals, a Gaussian Mixture Model is used as a hypocentre prior distribution to describe the seismically active areas, where the parameters are estimated based on previously located events in the region. The prior is truncated to constrain the solution to valid geometries, for example below the ground surface, excluding known cavities, voids and fractured zones. To reduce the sensitivity to outliers, different heavy-tailed distributions are evaluated to model the likelihood distribution of the arrivals given the hypocentre and the origin time. Among these distributions, the multivariate t-distribution is shown to produce the overall best performance, where the tail-mass adapts to the observed data. Hypocentre and uncertainty region estimates are based on simulations from the posterior distribution using Markov Chain Monte Carlo techniques. Velocity graphs (equivalent to traveltime graphs) are estimated using blasts from known locations, and applied to reduce the main uncertainties and thereby the final estimation error. To focus on the behaviour and the performance of the proposed distributions, a basic single-event Bayesian procedure is considered in this study for clarity. Estimation results are shown with different distributions, with and without prior distribution of seismicity, with wrong prior distribution, with and without error compensation, with and without error description, with insufficient arrival times.
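
    Why the heavy-tailed likelihood helps can be seen in a one-dimensional toy (ours, not the networks' full 3-D hypocentre posterior): a Student-t arrival-time model lets one wild residual be explained by the tails instead of dragging the location estimate.

    ```python
    # Gaussian vs Student-t likelihood in the presence of one gross outlier.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm, t

    arrivals = np.array([1.02, 0.98, 1.05, 0.96, 3.50])   # one gross outlier (s)

    gauss  = minimize_scalar(lambda m: -norm.logpdf(arrivals, m, 0.1).sum()).x
    robust = minimize_scalar(lambda m: -t.logpdf(arrivals, df=2, loc=m, scale=0.1).sum()).x
    print(f"Gaussian estimate : {gauss:.2f} s")   # pulled toward the outlier (~1.50)
    print(f"Student-t estimate: {robust:.2f} s")  # stays near the consistent picks (~1.00)
    ```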

  4. Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.

    PubMed

    Köver, Hania; Bao, Shaowen

    2010-05-05

    Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
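
    A bare-bones version of the storage idea, with all numbers invented: the prior lives in how many neurons encode each frequency, and perception follows prior times likelihood, pulling ambiguous input toward over-represented frequencies.

    ```python
    # Prior stored in representation size: over-represented frequencies bias
    # the percept (the paper reads the prior out via elevated baseline
    # activity; here we use the neuron counts directly).
    import numpy as np

    freqs = np.arange(10)
    n_neurons = np.array([1, 1, 2, 4, 8, 8, 4, 2, 1, 1])   # enlarged map = prior
    prior = n_neurons / n_neurons.sum()

    stim = 2.0                                              # ambiguous sensory input
    likelihood = np.exp(-0.5 * ((freqs - stim) / 2.0) ** 2) # broad (noisy) evidence
    posterior = prior * likelihood
    posterior /= posterior.sum()
    print("ML percept   :", freqs[np.argmax(likelihood)])   # 2
    print("Bayes percept:", freqs[np.argmax(posterior)])    # pulled toward 4
    ```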

  5. Effects of Prior Knowledge on Memory: Implications for Education

    ERIC Educational Resources Information Center

    Shing, Yee Lee; Brod, Garvin

    2016-01-01

    The encoding, consolidation, and retrieval of events and facts form the basis for acquiring new skills and knowledge. Prior knowledge can enhance those memory processes considerably and thus foster knowledge acquisition. But prior knowledge can also hinder knowledge acquisition, in particular when the to-be-learned information is inconsistent with…

  6. Can quantum probes satisfy the weak equivalence principle?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seveso, Luigi, E-mail: luigi.seveso@unimi.it; Paris, Matteo G.A.; INFN, Sezione di Milano, I-20133 Milano

    We address the question whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such formulation requires the introduction of a gravitational field not to modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  7. "Weak-Center" Gentrification and the Contradictions of Containment: deconcentrating poverty in downtown Los Angeles.

    PubMed

    Reese, Ellen; DeVerteuil, Geoffrey; Thach, Leanne

    2010-01-01

    This case study of recent efforts to deconcentrate poverty within the Skid Row area of Los Angeles examines processes of "weak-center" gentrification as it applies to a "service dependent ghetto," thus filling two key gaps in prior scholarship. We document the collaboration between the government, business and development interests, and certain non-profit agencies in this process and identify two key mechanisms of poverty deconcentration: housing/service displacement and the criminalization of low income residents. Following Harvey, we argue that these efforts are driven by pressures to find a "spatial fix" for capital accumulation through Downtown redevelopment. This process has been hotly contested, however, illustrating the strength of counter-pressures to gentrification/poverty deconcentration within "weak-center" urban areas.

  8. Randomised prior feedback modulates neural signals of outcome monitoring.

    PubMed

    Mushtaq, Faisal; Wilkie, Richard M; Mon-Williams, Mark A; Schaefer, Alexandre

    2016-01-15

    Substantial evidence indicates that decision outcomes are typically evaluated relative to expectations learned from relatively long sequences of previous outcomes. This mechanism is thought to play a key role in general learning and adaptation processes but relatively little is known about the determinants of outcome evaluation when the capacity to learn from series of prior events is difficult or impossible. To investigate this issue, we examined how the feedback-related negativity (FRN) is modulated by information briefly presented before outcome evaluation. The FRN is a brain potential time-locked to the delivery of decision feedback and it is widely thought to be sensitive to prior expectations. We conducted a multi-trial gambling task in which outcomes at each trial were fully randomised to minimise the capacity to learn from long sequences of prior outcomes. Event-related potentials for outcomes (Win/Loss) in the current trial (Outcome(t)) were separated according to the type of outcomes that occurred in the preceding two trials (Outcome(t-1) and Outcome(t-2)). We found that FRN voltage was more positive during the processing of win feedback when it was preceded by wins at Outcome(t-1) compared to win feedback preceded by losses at Outcome(t-1). However, no influence of preceding outcomes was found on FRN activity relative to the processing of loss feedback. We also found no effects of Outcome(t-2) on FRN amplitude relative to current feedback. Additional analyses indicated that this effect was largest for trials in which participants selected a decision different to the gamble chosen in the previous trial. These findings are inconsistent with models that solely relate the FRN to prediction error computation. Instead, our results suggest that if stable predictions about future events are weak or non-existent, then outcome processing can be determined by affective systems. More specifically, our results indicate that the FRN is likely to reflect the activity of positive

  9. Randomised prior feedback modulates neural signals of outcome monitoring

    PubMed Central

    Mushtaq, Faisal; Wilkie, Richard M.; Mon-Williams, Mark A.; Schaefer, Alexandre

    2016-01-01

    Substantial evidence indicates that decision outcomes are typically evaluated relative to expectations learned from relatively long sequences of previous outcomes. This mechanism is thought to play a key role in general learning and adaptation processes but relatively little is known about the determinants of outcome evaluation when the capacity to learn from series of prior events is difficult or impossible. To investigate this issue, we examined how the feedback-related negativity (FRN) is modulated by information briefly presented before outcome evaluation. The FRN is a brain potential time-locked to the delivery of decision feedback and it is widely thought to be sensitive to prior expectations. We conducted a multi-trial gambling task in which outcomes at each trial were fully randomised to minimise the capacity to learn from long sequences of prior outcomes. Event-related potentials for outcomes (Win/Loss) in the current trial (Outcome(t)) were separated according to the type of outcomes that occurred in the preceding two trials (Outcome(t-1) and Outcome(t-2)). We found that FRN voltage was more positive during the processing of win feedback when it was preceded by wins at Outcome(t-1) compared to win feedback preceded by losses at Outcome(t-1). However, no influence of preceding outcomes was found on FRN activity relative to the processing of loss feedback. We also found no effects of Outcome(t-2) on FRN amplitude relative to current feedback. Additional analyses indicated that this effect was largest for trials in which participants selected a decision different to the gamble chosen in the previous trial. These findings are inconsistent with models that solely relate the FRN to prediction error computation. Instead, our results suggest that if stable predictions about future events are weak or non-existent, then outcome processing can be determined by affective systems. More specifically, our results indicate that the FRN is likely to reflect the activity of positive

  10. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
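
    The contrast between hard and soft bounds is easy to demonstrate numerically. The sketch below (a hypothetical example, not Backus's construction) softens the hard bound x^T Q_x x ≤ 1 into a zero-mean Gaussian whose expected quadratic form equals 1, and shows that a large fraction of the soft prior's mass nevertheless violates the hard bound, i.e. the softened p_x both adds information absent from Q_x and contradicts it.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d = 10                                       # dimension of the model space X (illustrative)
    Q = np.eye(d)                                # hard bound: x^T Q x <= 1 (unit ball)

    # One common softening: a zero-mean Gaussian with E[x^T Q x] = 1, i.e. cov = Q^{-1} / d.
    x = rng.multivariate_normal(np.zeros(d), np.eye(d) / d, size=100_000)
    quad = np.einsum('ij,ij->i', x @ Q, x)       # Q_x(x, x) for each sample

    # Although the mean of quad is 1, much of the soft prior's mass lies outside the hard bound:
    print("fraction violating the hard bound: %.2f" % (quad > 1.0).mean())
    ```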

  11. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1987-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.

  12. Weak mixing below the weak scale in dark-matter direct detection

    NASA Astrophysics Data System (ADS)

    Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure

    2018-02-01

    If dark matter couples predominantly to the axial-vector currents of heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group.

  13. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
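
    For orientation, the standard encompassing-prior construction that the discussed methods build on can be written in a few lines: the Bayes factor of an inequality constrained hypothesis against the unconstrained model is the ratio of posterior to prior mass satisfying the constraint. This sketch uses a conjugate normal model with known variance and invented data; it illustrates the baseline approach, not the paper's new partial Bayes factors or g-prior limit.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    y = rng.normal(0.4, 1.0, size=30)            # hypothetical data, known sigma = 1

    # Encompassing (unconstrained) prior: mu ~ N(0, tau^2).
    tau2, sigma2, n = 1.0, 1.0, len(y)
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)   # conjugate normal posterior for mu
    post_mean = post_var * y.sum() / sigma2

    # Bayes factor for H1: mu > 0 versus the unconstrained model:
    # ratio of posterior to prior probability that the constraint holds.
    post_mass = 1.0 - stats.norm.cdf(0.0, post_mean, np.sqrt(post_var))
    prior_mass = 0.5                             # N(0, tau^2) puts half its mass on mu > 0
    print("BF(mu > 0 vs unconstrained) = %.2f" % (post_mass / prior_mass))
    ```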

  14. Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations

    NASA Technical Reports Server (NTRS)

    Mantz, A.; Allen, S. W.

    2011-01-01

    Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.

  15. Why is the Weak Force Weak?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The subatomic world is governed by three known forces, each with a vastly different strength. In this video, Fermilab’s Dr. Don Lincoln takes on the weak nuclear force and shows why it is so much weaker than the other known forces.

  16. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
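
    The nonparametric shape prior at the heart of the method can be illustrated with a small sketch. Assuming training shapes embedded as aligned signed-distance vectors (random stand-ins below), a Parzen/kernel density over those vectors yields a prior energy that is low near the training set and high far from it; the paper couples such densities across multiple objects and adds inter-shape pose terms, which are omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical training shapes, each embedded as a flattened signed-distance map
    # (random stand-ins for real, aligned SDFs of single or coupled objects).
    n_train, dim = 40, 64 * 64
    train_sdfs = rng.standard_normal((n_train, dim))

    def shape_prior_energy(candidate_sdf, sigma=15.0):
        """-log of a Parzen (kernel) density over the training shapes."""
        d2 = ((train_sdfs - candidate_sdf) ** 2).sum(axis=1)
        log_k = -d2 / (2.0 * sigma ** 2)         # unnormalised Gaussian kernels
        m = log_k.max()                          # log-sum-exp for numerical stability
        return -(m + np.log(np.exp(log_k - m).mean()))

    near = train_sdfs[0] + 0.1 * rng.standard_normal(dim)
    print("energy near the training set:", shape_prior_energy(near))
    print("energy far from it:         ", shape_prior_energy(near + 5.0))
    ```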

  17. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
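
    A minimal sketch of the weak-supervision structure the abstract describes, with random stand-ins for all learned quantities: an analysis dictionary maps each time instant to a sparse code, a per-instant classifier produces a latent label signal, and cardinality control ties that latent signal to the single observed global label. The EM estimation itself is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    T, d, k = 100, 8, 16                         # time steps, signal dim, dictionary atoms
    X = rng.standard_normal((d, T))              # one time series (columns are instants)
    W = rng.standard_normal((k, d))              # analysis dictionary (would be learned)

    def soft_threshold(z, lam):
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    Z = soft_threshold(W @ X, lam=1.0)           # sparse analysis code, one per instant

    w_cls = rng.standard_normal(k)               # instantaneous classifier (would be learned)
    inst_labels = (w_cls @ Z) > 0                # latent per-instant label signal

    # Cardinality control: the series is positive iff at least c instants are positive.
    c = 5
    global_label = inst_labels.sum() >= c
    print("code sparsity: %.2f, bag label: %s" % ((Z == 0).mean(), global_label))
    ```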

  18. Control on frontal thrust progression by the mechanically weak Gondwana horizon in the Darjeeling-Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Ghosh, Subhajit; Bose, Santanu; Mandal, Nibir; Das, Animesh

    2018-03-01

    This study integrates field evidence with laboratory experiments to show the mechanical effects of a lithologically contrasting stratigraphic sequence on the development of frontal thrusts: Main Boundary Thrust (MBT) and Daling Thrust (DT) in the Darjeeling-Sikkim Himalaya (DSH). We carried out field investigations mainly along two river sections in the DSH: Tista-Kalijhora and Mahanadi, covering an orogen-parallel stretch of 20 km. Our field observations suggest that the coal-shale dominated Gondwana sequence (sandwiched between the Daling Group in the north and Siwaliks in the south) has acted as a mechanically weak horizon to localize the MBT and DT. We simulated a similar mechanical setting in scaled model experiments to validate our field interpretation. In experiments, such a weak horizon at a shallow depth perturbs the sequential thrust progression, and causes a thrust to localize in the vicinity of the weak zone, splaying from the basal detachment. We correlate this weak-zone-controlled thrust with the DT, which accommodates a large shortening prior to activation of the weak zone as a new detachment with ongoing horizontal shortening. The entire shortening in the model is then transferred to this shallow detachment to produce a new sequence of thrust splays. Extrapolating this model result to the natural prototype, we show that the mechanically weak Gondwana Sequence has caused localization of the DT and MBT in the mountain front of DSH.

  19. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
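
    The prior-probability estimation discussed above can be formalised in many ways; one minimal, hypothetical rendering is a Beta-Bernoulli tracker in which each trial's aligned/disparate outcome updates the expected probability that the next audiovisual pair is aligned.

    ```python
    # Beta-Bernoulli tracking of the probability that the next audiovisual pair is aligned.
    alpha, beta = 1.0, 1.0                       # flat initial prior
    trials = [1, 1, 1, 0, 1, 1, 0, 1]            # 1 = aligned, 0 = disparate (hypothetical block)
    for aligned in trials:
        alpha += aligned
        beta += 1 - aligned
        print("expect aligned on next trial with p = %.2f" % (alpha / (alpha + beta)))
    ```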

  20. Weak-light Phase-locking for LISA

    NASA Technical Reports Server (NTRS)

    McNamara, Paul W.

    2004-01-01

    The long armlengths of the LISA interferometer, and the finite aperture of the telescope, lead to an optical power attenuation of approximately 10^-10 between the transmitted and the received light. Simple reflection at the end of the arm is therefore not an optimum interferometric design. Instead, a local laser is offset phase-locked to the weak incoming beam, transferring the phase information of the incoming to the outgoing light. This paper reports on an experiment to characterize a weak-light phase-locking scheme suitable for LISA, in which a diode-pumped Nd:YAG non-planar ring oscillator (NPRO) is offset phase-locked to a low-power (13 pW) frequency-stabilised master NPRO. Preliminary results of the relative phase noise of the slave laser show shot-noise-limited performance above 0.4 Hz. Excess noise is observed at lower frequencies, most probably due to thermal effects in the optical arrangement and phase-sensing electronics.

  1. Weakness

    MedlinePlus

    … amyotrophic lateral sclerosis (ALS); weakness of the muscles of the face (Bell palsy); group of disorders involving brain and nervous system; … them (myasthenia gravis); polio. Home Care: follow the treatment your health care provider recommends to treat the …

  2. Compatibility between weak gel and microorganisms in weak gel-assisted microbial enhanced oil recovery.

    PubMed

    Qi, Yi-Bin; Zheng, Cheng-Gang; Lv, Cheng-Yuan; Lun, Zeng-Min; Ma, Tao

    2018-03-20

    To investigate weak gel-assisted microbial flooding in Block Wang Long Zhuang in the Jiangsu Oilfield, the compatibility of weak gel and microbe was evaluated using laboratory experiments. Bacillus sp. W5 was isolated from the formation water in Block Wang Long Zhuang. The rate of oil degradation reached 178 mg/day, and the rate of viscosity reduction reached 75.3%. Strain W5 could produce lipopeptide with a yield of 1254 mg/L. Emulsified crude oil was dispersed in the microbial degradation system, and the average diameter of the emulsified oil particles was 18.54 μm. Bacillus sp. W5 did not affect the rheological properties of the weak gel, and the presence of the weak gel did not significantly affect bacterial reproduction (as indicated by an unchanged microbial biomass), emulsification (surface tension of 35.56 mN/m and average oil particle size of 21.38 μm), oil degradation (162 mg/day) or oil viscosity reduction (72.7%). Core-flooding experiments indicated oil recovery of 23.6% when both the weak gel and Bacillus sp. W5 were injected into the system, 14.76% when only the weak gel was injected, and 9.78% when only strain W5 was injected without the weak gel. The results demonstrate good compatibility between strain W5 and the weak gel and highlight the application potential of weak gel-assisted microbial flooding. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  3. Entanglement-Enhanced Phase Estimation without Prior Phase Information

    NASA Astrophysics Data System (ADS)

    Colangelo, G.; Martin Ciurana, F.; Puentes, G.; Mitchell, M. W.; Sewell, R. J.

    2017-06-01

    We study the generation of planar quantum squeezed (PQS) states by quantum nondemolition (QND) measurement of an ensemble of 87Rb atoms with a Poisson-distributed atom number. Precise calibration of the QND measurement allows us to infer the conditional covariance matrix describing the F_y and F_z components of the PQS states, revealing the dual squeezing characteristic of PQS states. PQS states have been proposed for single-shot phase estimation without prior knowledge of the likely values of the phase. We show that for an arbitrary phase, the generated PQS states can give a metrological advantage of at least 3.1 dB relative to classical states. The PQS state also beats, for most phase angles, single-component-squeezed states generated by QND measurement with the same resources and atom number statistics. Using spin squeezing inequalities, we show that spin-spin entanglement is responsible for the metrological advantage.

  4. Weak bump quasars

    NASA Technical Reports Server (NTRS)

    Wilkes, B. J.; Mcdowell, J.

    1994-01-01

    Research into the optical, ultraviolet and infrared continuum emission from quasars and their host galaxies was carried out. The main results were the discovery of quasars with unusually weak infrared emission and the construction of a quantitative estimate of the dispersion in quasar continuum properties. One of the major uncertainties in the measurement of quasar continuum strength is the contribution to the continuum of the quasar host galaxy as a function of wavelength. Continuum templates were constructed for different types of host galaxy and individual estimates made of the decomposed quasar and host continua based on existing observations of the target quasars. The results are that host galaxy contamination is worse than previously suspected, and some apparent weak bump quasars are really normal quasars with strong host galaxies. However, the existence of true weak bump quasars such as PHL 909 was confirmed. The study of the link between the bump strength and other wavebands was continued by comparing with IRAS data. There is evidence that excess far infrared radiation is correlated with weaker ultraviolet bumps. This argues against an orientation effect and implies a probable link with the host galaxy environment, for instance the presence of a luminous starburst. However, the evidence still favors the idea that reddening is not important in those objects with ultraviolet weak bumps. The same work has led to the discovery of a class of infrared weak quasars. Pushing another part of the envelope of quasar continuum parameter space, the IR-weak quasars have implications for understanding the effects of reddening internal to the quasars, the reality of ultraviolet turnovers, and may allow further tests of the Phinney dust model for the IR continuum. They will also be important objects for studying the claimed IR to x-ray continuum correlation.

  5. Use of Very Weak Radiation Sources to Determine Aircraft Runway Position

    NASA Technical Reports Server (NTRS)

    Drinkwater, Fred J., III; Kibort, Bernard R.

    1965-01-01

    Various methods of providing runway information in the cockpit during the take-off and landing roll have been proposed. The most reliable method has been to use runway distance markers when visible. Flight tests were used to evaluate the feasibility of using weak radioactive sources to trigger a runway distance counter in the cockpit. The results of these tests indicate that a weak radioactive source would provide a reliable signal by which this indicator could be operated.

  6. History of Weak Interactions

    DOE R&D Accomplishments Database

    Lee, T. D.

    1970-07-01

    While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces.

  7. A new prior for bayesian anomaly detection: application to biosurveillance.

    PubMed

    Shen, Y; Cooper, G F

    2010-01-01

    Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to assess the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the
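
    A minimal Gamma-Poisson rendering of the idea, with invented counts (the paper's actual prior and evaluation are richer): the baseline rate prior is estimated from historical data, the semi-informative outbreak model multiplies the rate by a factor r ≥ 1, and Bayes' rule gives the posterior probability of an outbreak for today's count.

    ```python
    import numpy as np
    from scipy import stats

    history = np.array([12, 9, 14, 11, 10, 13, 8, 12])  # daily baseline counts (invented)
    a, b = float(history.sum()), float(len(history))    # Gamma(a, rate b) baseline prior on the rate

    def log_marginal(y, mult):
        # Poisson(mult * lam) with lam ~ Gamma(a, rate b) integrates to a
        # negative binomial with n = a and p = b / (b + mult).
        return stats.nbinom.logpmf(y, a, b / (b + mult))

    y_today = 22
    r_grid = np.linspace(1.0, 3.0, 41)                  # semi-informative: rate increases, r >= 1
    m_outbreak = np.exp(log_marginal(y_today, r_grid)).mean()  # average over a uniform prior on r
    m_baseline = np.exp(log_marginal(y_today, 1.0))

    prior_outbreak = 0.01
    post = prior_outbreak * m_outbreak / (
        prior_outbreak * m_outbreak + (1.0 - prior_outbreak) * m_baseline)
    print("posterior probability of an outbreak today: %.3f" % post)
    ```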

  8. Mechanisms underlying comprehension of health information in adulthood: the roles of prior knowledge and working memory capacity.

    PubMed

    Soederberg Miller, Lisa M; Gibson, Tanja N; Applegate, Elizabeth A; de Dios, Jeannette

    2011-07-01

    Prior knowledge, working memory capacity (WMC), and conceptual integration (attention allocated to integrating concepts in text) are critical within many contexts; however, their impact on the acquisition of health information (i.e. learning) is relatively unexplored. We examined how these factors impact learning about nutrition within a cross-sectional study of adults ages 18 to 81. Results showed that conceptual integration mediated the effects of knowledge and WMC on learning, confirming that attention to concepts while reading is important for learning about health. We also found that when knowledge was controlled, age declines in learning increased, suggesting that knowledge mitigates the effects of age on learning about nutrition.

  9. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization, and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditional on facies observations and normal-score-transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.

  10. Power Recycled Weak Value Based Metrology

    DTIC Science & Technology

    2015-04-29

    …cavity, one is able to more efficiently use the input laser power by increasing the total power inside the interferometer. In the context of these weak… (Phys. Rev. Lett. 114(17), 170801, 1 May 2015)

  11. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
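
    For intuition, the sketch below shows the effect being targeted, although by grid approximation rather than the paper's meta-regression/pseudo-data device: an informative inverse-gamma prior on the between-study variance is combined with a random-effects likelihood for a hypothetical three-study meta-analysis.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical three-study meta-analysis: effect estimates and within-study variances.
    y = np.array([0.30, 0.10, 0.45])
    v = np.array([0.04, 0.06, 0.05])

    # External evidence on heterogeneity as an informative inverse-gamma prior on tau^2
    # (shape and scale are illustrative, not the paper's derived values).
    tau2_prior = stats.invgamma(a=2.0, scale=0.08)

    mu = np.linspace(-1.0, 1.5, 251)[:, None]        # grid over the overall effect
    tau2 = np.linspace(1e-4, 0.8, 400)[None, :]      # grid over the between-study variance

    # Random-effects model: y_i ~ N(mu, v_i + tau^2), flat prior on mu.
    log_lik = sum(stats.norm.logpdf(yi, mu, np.sqrt(vi + tau2)) for yi, vi in zip(y, v))
    log_post = log_lik + tau2_prior.logpdf(tau2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    print("posterior mean tau^2: %.3f" % (post.sum(axis=0) * tau2.ravel()).sum())
    print("posterior mean mu:    %.3f" % (post.sum(axis=1) * mu.ravel()).sum())
    ```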

  12. When does prior knowledge disproportionately benefit older adults’ memory?

    PubMed Central

    Badham, Stephen P.; Hay, Mhairi; Foxon, Natasha; Kaur, Kiran; Maylor, Elizabeth A.

    2016-01-01

    Material consistent with knowledge/experience is generally more memorable than material inconsistent with knowledge/experience – an effect that can be more extreme in older adults. Four experiments investigated knowledge effects on memory with young and older adults. Memory for familiar and unfamiliar proverbs (Experiment 1) and for common and uncommon scenes (Experiment 2) showed similar knowledge effects across age groups. Memory for person-consistent and person-neutral actions (Experiment 3) showed a greater benefit of prior knowledge in older adults. For cued recall of related and unrelated word pairs (Experiment 4), older adults benefited more from prior knowledge only when it provided uniquely useful additional information beyond the episodic association itself. The current data and literature suggest that prior knowledge has the age-dissociable mnemonic properties of (1) improving memory for the episodes themselves (age invariant), and (2) providing conceptual information about the tasks/stimuli extrinsically to the actual episodic memory (particularly aiding older adults). PMID:26473767

  13. Agreement and Predictive Validity Using Less Conservative FNIH Sarcopenia Project Weakness Cutpoints

    PubMed Central

    Shaffer, Nancy Chiles; Ferrucci, Luigi; Shardell, Michelle; Simonsick, Eleanor M.; Studenski, Stephanie

    2016-01-01

    OBJECTIVES The FNIH Sarcopenia Project derived conservative definitions for weakness and low lean mass, resulting in low prevalence and low agreement with prior definitions. The FNIH Project also estimated a less conservative cutpoint for low grip strength, potentially yielding a cutpoint for low lean mass more consistent with the European Working Group on Sarcopenia in Older People (EWGSOP). We derived lean mass cutpoints based on the less conservative cutpoint for grip strength (Weak_I), and assessed agreement with EWGSOP and prediction of incident slow walking and mortality. DESIGN, SETTING, PARTICIPANTS, MEASUREMENTS Longitudinal analysis of 287 men and 258 women from the Baltimore Longitudinal Study of Aging aged >65 years, with 2–10 years of follow-up. Weakness was determined via hand dynamometer, appendicular lean mass (ALM) via DEXA, and slow walking by a 6 m usual-pace walk <0.8 m/s. Analyses used classification and regression tree analysis, Cohen's kappa, and Cox models. RESULTS Cutpoints derived from Weak_I for ALM (ALM_I) and for ALM adjusted for body mass index (ALM/BMI_I) were <21.4 kg (men) and <14.1 kg (women) for ALM_I, and <0.725 (men) and <0.591 (women) for ALM/BMI_I. Kappas with EWGSOP were 0.65 (men) and 0.75 (women) for ALM_I, and 0.34 (men) and 0.47 (women) for ALM/BMI_I. In men, the hazard ratio for incident slow walking by Weak_I + ALM_I was 2.44 (95% CI: 1.02–5.82) versus 2.91 (95% CI: 1.11–7.62) by EWGSOP. Neither approach predicted incident slow walking in women. CONCLUSION The ALM_I cutpoints agree with EWGSOP and predict slow walking in men. Future studies should explore sex differences in the relationship between body composition and physical function and the impact of change in muscle mass on muscle strength and physical function. PMID:28024092

  14. A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.

    PubMed

    Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy

    2010-01-18

    We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions in the image which are expected to have weak, missing or corrupt edges, or they could be regions in the image which the user is not interested in segmenting but which are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training we generate a map which indicates the regions in the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that are used to represent the curve. We vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate the regions with low confidence, the regions containing accurate edges will have a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions will be approximated based on the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges. This is because we eliminate the influence of the low-confidence regions which may mislead the final segmentation. Similarly, when we use the labels to indicate the regions which are not of importance, we obtain a better segmentation of the object in the regions we are interested in.

  15. Weak values in collision theory

    NASA Astrophysics Data System (ADS)

    de Castro, Leonardo Andreta; Brasil, Carlos Alexandre; Napolitano, Reginaldo de Jesus

    2018-05-01

    Weak measurements have an increasing number of applications in contemporary quantum mechanics. They were originally described as a weak interaction that slightly entangled the translational degrees of freedom of a particle to its spin, yielding surprising results after post-selection. That description often ignores the kinetic energy of the particle and its movement in three dimensions. Here, we include these elements and re-obtain the weak values within the context of collision theory by two different approaches, and prove that the results are compatible with each other and with the results from the traditional approach. To provide a more complete description, we generalize weak values into weak tensors and use them to provide a more realistic description of the Stern-Gerlach apparatus.

  16. Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Girkin, Christopher A.; Hariri, Amirhossein; Sadda, SriniVas R.

    2016-03-01

    Recently, much attention has been focused on determining the role of the peripapillary choroid (the layer between the outer retinal pigment epithelium (RPE)/Bruch's membrane (BM) and the choroid-sclera (C-S) junction), whether primary or secondary, in the pathogenesis of glaucoma. However, automated choroidal segmentation in spectral-domain optical coherence tomography (SD-OCT) images of the optic nerve head (ONH) has not been reported, probably owing to the fact that the presence of the BM opening (BMO, corresponding to the optic disc) can deflect the choroidal segmentation from its correct position. The purpose of this study is to develop a 3D graph-based approach to identify the 3D choroidal layer in ONH-centered SD-OCT images using the BMO prior information. More specifically, an initial 3D choroidal segmentation was first performed using the 3D graph search algorithm. Note that varying surface interaction constraints based on the choroidal morphological model were applied. To assist the choroidal segmentation, two other surfaces, the internal limiting membrane and the inner/outer segment junction, were also segmented. Based on the segmented layer between the RPE/BM and the C-S junction, a 2D projection map was created. The BMO in the projection map was detected by a 2D graph search. The pre-defined BMO information was then incorporated into the surface interaction constraints of the 3D graph search to obtain a more accurate choroidal segmentation. Twenty SD-OCT images from 20 healthy subjects were used. The mean differences of the choroidal borders between the algorithm and manual segmentation were at a sub-voxel level, indicating a high level of segmentation accuracy.

  17. The Effects of Prior Knowledge Activation on Free Recall and Study Time Allocation.

    ERIC Educational Resources Information Center

    Machiels-Bongaerts, Maureen; And Others

    The effects of mobilizing prior knowledge on information processing were studied. Two hypotheses, the cognitive set-point hypothesis and the selective attention hypothesis, try to account for the facilitation effects of prior knowledge activation. These hypotheses predict different recall patterns as a result of mobilizing prior knowledge. In…

  18. Major strengths and weaknesses of the lod score method.

    PubMed

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure of the information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.
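
    As a reminder of what the method computes, here is a hypothetical two-point example: the lod score compares the likelihood of the observed recombinants at a recombination fraction theta with the likelihood under free recombination (theta = 0.5), on a log10 scale.

    ```python
    import numpy as np

    def lod_score(recombinants, non_recombinants, theta):
        """Two-point lod score for phase-known, fully informative meioses:
        Z(theta) = log10[L(theta) / L(0.5)]."""
        n = recombinants + non_recombinants
        log_l = recombinants * np.log10(theta) + non_recombinants * np.log10(1.0 - theta)
        return log_l - n * np.log10(0.5)

    thetas = np.array([0.01, 0.05, 0.1, 0.2, 0.3, 0.4])
    z = lod_score(2, 18, thetas)
    print("max lod %.2f at theta = %.2f" % (z.max(), thetas[z.argmax()]))
    # Z >= 3 is the classical threshold for declaring linkage.
    ```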

  19. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

    We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object, leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, such as flat detectors in computerized tomography as used in non-destructive testing applications, as well as non-regular scanning curves, e.g. those appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to significantly increased image quality and superior reconstructions compared to standard iterative methods.
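
    A dense toy version of the iteration (the real method works with semi-discrete projection operators, basis-function-dependent weights and cone-beam/laminographic geometries, none of which are modeled here): cycling relaxed Landweber steps over data blocks, one block per view.

    ```python
    import numpy as np

    def landweber_kaczmarz(A_blocks, b_blocks, omega=1.0, sweeps=200):
        """Cyclic Landweber-Kaczmarz: one relaxed Landweber step per data block,
        x <- x + omega * A_j^T (b_j - A_j x) / ||A_j||_2^2."""
        x = np.zeros(A_blocks[0].shape[1])
        for _ in range(sweeps):
            for A_j, b_j in zip(A_blocks, b_blocks):
                x += omega / np.linalg.norm(A_j, 2) ** 2 * (A_j.T @ (b_j - A_j @ x))
        return x

    # Tiny consistent test system: four random "projection" blocks of one object.
    rng = np.random.default_rng(5)
    x_true = rng.standard_normal(30)
    A_blocks = [rng.standard_normal((10, 30)) for _ in range(4)]
    b_blocks = [A @ x_true for A in A_blocks]
    x = landweber_kaczmarz(A_blocks, b_blocks)
    print("relative error: %.2e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
    ```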

  20. Bayesian Markov Chain Monte Carlo inversion for weak anisotropy parameters and fracture weaknesses using azimuthal elastic impedance

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi

    2017-08-01

    A system of aligned vertical fractures and fine horizontal shale layers combines to form an equivalent orthorhombic medium. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach of utilizing seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first express the perturbation in the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using this perturbation and the scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media. We then demonstrate an approach that first uses a model-constrained damped least-squares algorithm to estimate azimuthal EI from partial incidence-phase-angle-stack seismic reflection data at different azimuths, and then extracts weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov Chain Monte Carlo inversion method. In addition, a new procedure to construct a rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that unknown parameters including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses can be estimated stably in the case of seismic data containing moderate noise, and that our approach can give a reasonable estimate of anisotropy in a fractured shale reservoir.

  1. Does anticipatory sweating occur prior to fluid consumption?

    PubMed

    Wing, David; McClintock, Rebecca; Plumlee, Deva; Rathke, Michelle; Burnett, Tim; Lyons, Bailey; Buono, Michael J

    2012-01-01

    The purpose of this study was to examine if anticipatory sweating occurs prior to fluid consumption in dehydrated subjects. It was hypothesized that there would first be an anticipatory response to the sight of water, and then with drinking, a second response caused by mechanical stimulation of oropharyngeal nerves. Dehydrated subjects (n=19) sat in a heat chamber for 30 minutes. At minute 15, a resistance hygrometer capsule was attached and sweat rate was measured every 3 seconds. At minute 35:00, a researcher entered the room with previously measured water (2 ml/kg euhydrated body weight). At minute 35:30, the subject was allowed to drink. Data collection continued for 5 minutes post consumption. As expected, 16 of the 19 subjects responded to oropharyngeal stimuli with increased sweat rate. However, the new finding was that a majority (12 of 19) also showed an anticipatory sweating response prior to fluid consumption. Subjects were divided into 4 groups based on the magnitude of the sweating response. Strong responders' (n=4) anticipatory response accounted for 50% or more of the total change in sweat rate. Moderate responders' (n=4) anticipatory response accounted for 20%-49%. Weak responders' (n=4) anticipatory response accounted for 6-20%. Finally, non-responders (n=7) showed no anticipatory response. Although previously noted anecdotally in the literature, the current study is the first to demonstrate that measurable anticipatory sweating occurs prior to fluid intake in dehydrated subjects in a significant percentage of the population. Such data suggests that cerebral input, like oropharyngeal stimulation, can temporarily remove the dehydration-induced inhibition of sweating.

  2. Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Rossiter, B. N.; Heather, M. A.

    2004-08-01

    Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data onto idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.

  3. Prefrontal Engagement during Source Memory Retrieval Depends on the Prior Encoding Task

    PubMed Central

    Kuo, Trudy Y.; Van Petten, Cyma

    2008-01-01

    The prefrontal cortex is strongly engaged by some, but not all, episodic memory tests. Prior work has shown that source recognition tests—those that require memory for conjunctions of studied attributes—yield deficient performance in patients with prefrontal damage and greater prefrontal activity in healthy subjects, as compared to simple recognition tests. Here, we tested the hypothesis that there is no intrinsic relationship between the prefrontal cortex and source memory, but that the prefrontal cortex is engaged by the demand to retrieve weakly encoded relationships. Subjects attempted to remember object/color conjunctions after an encoding task that focused on object identity alone, and an integrative encoding task that encouraged attention to object/color relationships. After the integrative encoding task, the late prefrontal brain electrical activity that typically occurs in source memory tests was eliminated. Earlier brain electrical activity related to successful recognition of the objects was unaffected by the nature of prior encoding. PMID:16839287

  4. Quantum counterfactual communication without a weak trace

    NASA Astrophysics Data System (ADS)

    Arvidsson-Shukur, D. R. M.; Barnes, C. H. W.

    2016-12-01

    The classical theories of communication rely on the assumption that there has to be a flow of particles from Bob to Alice in order for him to send a message to her. We develop a quantum protocol that allows Alice to perceive Bob's message "counterfactually"; that is, without Alice receiving any particles that have interacted with Bob. By utilizing a setup built on results from interaction-free measurements, we outline a communication protocol whereby the information travels in the opposite direction of the emitted particles. In comparison to previous attempts on such protocols, this one is such that a weak measurement at the message source would not leave a weak trace that could be detected by Alice's receiver. While some interaction-free schemes require a large number of carefully aligned beam splitters, our protocol is realizable with two or more beam splitters. We demonstrate this protocol by numerically solving the time-dependent Schrödinger equation for a Hamiltonian that implements this quantum counterfactual phenomenon.

  5. Hypernuclear Weak Decays

    NASA Astrophysics Data System (ADS)

    Itonaga, K.; Motoba, T.

    The recent theoretical studies of Lambda-hypernuclear weak decays, both nonmesonic and pi-mesonic, are developed with the aim of disclosing the link between the experimental decay observables and the underlying basic weak decay interactions and the weak decay mechanisms. The expressions of the nonmesonic decay rates Gamma_{nm} and the decay asymmetry parameter alpha_1 of protons from the polarized hypernuclei are presented in the shell model framework. We then introduce the meson-theoretical Lambda N -> NN interactions, which include the one-meson exchanges, the correlated-2pi exchanges, and the chiral-pair-meson exchanges. The features of the meson exchange potentials and their roles in the nonmesonic decays are discussed. With the adoption of the pi + 2pi/rho + 2pi/sigma + omega + K + rhopi/a_1 + sigmapi/a_1 exchange potentials, we have carried out systematic calculations of the nonmesonic decay observables for light-to-heavy hypernuclei. The present model can account for the available experimental data on the decay rates, the Gamma_n/Gamma_p ratios, and the intrinsic asymmetry parameters alpha_Lambda (alpha_Lambda is related to alpha_1) of emitted protons well and consistently within the error bars. The hypernuclear lifetimes are evaluated by converting the total weak decay rates Gamma_{tot} = Gamma_pi + Gamma_{nm} to tau, which exhibit a saturation property for hypernuclear mass A ≥ 30 and agree grossly well with experimental data over the mass range from light to heavy hypernuclei, except for the very light ones. Future extensions of the model and the remaining problems are also mentioned. The pi-mesonic weak processes are briefly surveyed, and the calculations and predictions are compared with and confirmed by the recent high-precision FINUDA pi-mesonic decay data. This shows that the theoretical basis seems to be firmly grounded.

  6. Standard Anatomic Terminologies: Comparison for Use in a Health Information Exchange–Based Prior Computed Tomography (CT) Alerting System

    PubMed Central

    Lowry, Tina; Vreeman, Daniel J; Loo, George T; Delman, Bradley N; Thum, Frederick L; Slovis, Benjamin H; Shapiro, Jason S

    2017-01-01

    Background A health information exchange (HIE)–based prior computed tomography (CT) alerting system may reduce avoidable CT imaging by notifying ordering clinicians of prior relevant studies when a study is ordered. For maximal effectiveness, a system would alert not only for prior same CTs (exams mapped to the same code from an exam name terminology) but also for similar CTs (exams mapped to different exam name terminology codes but in the same anatomic region) and anatomically proximate CTs (exams in adjacent anatomic regions). Notification of previous same studies across an HIE requires mapping of local site CT codes to a standard terminology for exam names (such as Logical Observation Identifiers Names and Codes [LOINC]) to show that two studies with different local codes and descriptions are equivalent. Notifying of prior similar or proximate CTs requires an additional mapping of exam codes to anatomic regions, ideally coded by an anatomic terminology. Several anatomic terminologies exist, but no prior studies have evaluated how well they would support an alerting use case. Objective The aim of this study was to evaluate the fitness of five existing standard anatomic terminologies to support similar or proximate alerts of an HIE-based prior CT alerting system. Methods We compared five standard anatomic terminologies (Foundational Model of Anatomy, Systematized Nomenclature of Medicine Clinical Terms, RadLex, LOINC, and LOINC/Radiological Society of North America [RSNA] Radiology Playbook) to an anatomic framework created specifically for our use case (Simple ANatomic Ontology for Proximity or Similarity [SANOPS]), to determine whether the existing terminologies could support our use case without modification. On the basis of an assessment of optimal terminology features for our purpose, we developed an ordinal anatomic terminology utility classification. We mapped samples of 100 random and the 100 most frequent LOINC CT codes to anatomic regions in each

  7. Contribution of prior semantic knowledge to new episodic learning in amnesia.

    PubMed

    Kan, Irene P; Alexander, Michael P; Verfaellie, Mieke

    2009-05-01

    We evaluated whether prior semantic knowledge would enhance episodic learning in amnesia. Subjects studied prices that are either congruent or incongruent with prior price knowledge for grocery and household items and then performed a forced-choice recognition test for the studied prices. Consistent with a previous report, healthy controls' performance was enhanced by price knowledge congruency; however, only a subset of amnesic patients experienced the same benefit. Whereas patients with relatively intact semantic systems, as measured by an anatomical measure (i.e., lesion involvement of anterior and lateral temporal lobes), experienced a significant congruency benefit, patients with compromised semantic systems did not experience a congruency benefit. Our findings suggest that when prior knowledge structures are intact, they can support acquisition of new episodic information by providing frameworks into which such information can be incorporated.

  8. Prior probability modulates anticipatory activity in category-specific areas.

    PubMed

    Trapp, Sabrina; Lepsien, Jöran; Kotz, Sonja A; Bar, Moshe

    2016-02-01

    Bayesian models are currently a dominant framework for describing human information processing. However, it is not clear yet how major tenets of this framework can be translated to brain processes. In this study, we addressed the neural underpinning of prior probability and its effect on anticipatory activity in category-specific areas. Before fMRI scanning, participants were trained in two behavioral sessions to learn the prior probability and correct order of visual events within a sequence. The events of each sequence included two different presentations of a geometric shape and one picture of either a house or a face, which appeared with either a high or a low likelihood. Each sequence was preceded by a cue that gave participants probabilistic information about which items to expect next. This allowed examining cue-related anticipatory modulation of activity as a function of prior probability in category-specific areas (fusiform face area and parahippocampal place area). Our findings show that activity in the fusiform face area was higher when faces had a higher prior probability. The finding of a difference between levels of expectations is consistent with graded, probabilistically modulated activity, but the data do not rule out the alternative explanation of a categorical neural response. Importantly, these differences were only visible during anticipation, and vanished at the time of stimulus presentation, calling for a functional distinction when considering the effects of prior probability. Finally, there were no anticipatory effects for houses in the parahippocampal place area, suggesting sensitivity to stimulus material when looking at effects of prediction.

  9. Science Literacy and Prior Knowledge of Astronomy MOOC Students

    NASA Astrophysics Data System (ADS)

    Impey, Chris David; Buxner, Sanlyn; Wenger, Matthew; Formanek, Martin

    2018-01-01

    Many of the science classes offered on Coursera fall into the category of general education or general interest classes for lifelong learners, including our own, Astronomy: Exploring Time and Space. Very little is known about the backgrounds and prior knowledge of these students. In this talk we present the results of a survey of our Astronomy MOOC students, and compare these results to our previous work on undergraduate students in introductory astronomy courses. Survey questions examined student demographics and motivations as well as their science and information literacy (including basic science knowledge, interest, attitudes and beliefs, and where they get their information about science). We found that our MOOC students differ from undergraduate students in more ways than demographics: many MOOC students demonstrated high levels of science and information literacy. With a more comprehensive understanding of our students' motivations and prior knowledge about science, and of how they get their information about science, we will be able to develop more tailored learning experiences for these lifelong learners.

  10. Joint weak value for all order coupling using continuous variable and qubit probe

    NASA Astrophysics Data System (ADS)

    Kumari, Asmita; Pan, Alok Kumar; Panigrahi, Prasanta K.

    2017-11-01

    The notion of weak measurement in quantum mechanics has gained significant and wide interest for realizing apparently counterintuitive quantum effects. In recent times, several theoretical and experimental works have demonstrated the joint weak value of two observables where the coupling strength is restricted to second order. In this paper, we extend such a formulation by providing a complete treatment of the joint weak measurement scenario at all orders of coupling for observables satisfying A^2 = 𝕀 and A^2 = A, which allows us to reveal several hitherto unexplored features. By considering the probe state to be discrete as well as continuous variable, we demonstrate how the joint weak value can be inferred for any given strength of the coupling. A particularly interesting result is that even if the initial pointer state is uncorrelated, the single-pointer displacement can provide information about the joint weak value if at least the third order of the coupling is taken into account. As an application of our scheme, we provide an all-order-coupling treatment of the well-known Hardy paradox by considering continuous as well as discrete meter states, and show how negative joint weak probabilities emerge in the quantum paradoxes at the weak coupling limit.
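
    As a concrete illustration of the baseline quantity being generalized in this record, the sketch below computes a standard first-order weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ for a qubit observable satisfying A^2 = 𝕀. The states and angles are hypothetical choices for illustration, not taken from the paper.

    ```python
    import numpy as np

    # Pauli-Z observable; satisfies A @ A = identity, one of the two cases above
    A = np.array([[1, 0], [0, -1]], dtype=complex)

    # Pre-selected state |psi> and post-selected state |phi>, chosen nearly
    # orthogonal, which is what makes the weak value anomalously large
    a, b = np.pi / 4 + 0.05, -np.pi / 4
    psi = np.array([np.cos(a), np.sin(a)], dtype=complex)
    phi = np.array([np.cos(b), np.sin(b)], dtype=complex)

    # First-order weak value: A_w = <phi|A|psi> / <phi|psi>
    weak_value = (phi.conj() @ A @ psi) / (phi.conj() @ psi)
    print(weak_value.real)  # ~ -20, far outside the eigenvalue range [-1, 1]
    ```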

  11. Role of Weak Measurements on States Ordering and Monogamy of Quantum Correlation

    NASA Astrophysics Data System (ADS)

    Hu, Ming-Liang; Fan, Heng; Tian, Dong-Ping

    2015-01-01

    The information-theoretic definition of quantum correlation, e.g., quantum discord, is measurement dependent. By considering more general quantum measurements, namely weak measurements, which include the projective measurement as a limiting case, we show that while weak measurements can enable one to capture more quantumness of correlation in a state, they can also induce other counterintuitive quantum effects. Specifically, we show that general measurements with different strengths can impose different orderings on the quantum correlations of some states. They can also modify the monogamous character of certain classes of states, which may diminish the usefulness of quantum correlation as a resource in some protocols. In this sense, we say that weak measurements play a dual role in defining quantum correlation.
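
    For readers unfamiliar with the weak-measurement formalism invoked here, the sketch below implements one widely used parameterization of qubit weak measurement operators (the Oreshkov-Brun form), in which a strength parameter x interpolates between no measurement and the projective limit. The test state and strengths are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def weak_measurement_ops(x):
        """Weak measurement operators P(x), P(-x) for a qubit (Oreshkov-Brun form).
        x is the strength; x -> infinity recovers the projective measurement."""
        t = np.tanh(x)
        Pi0 = np.array([[1, 0], [0, 0]], dtype=complex)  # projector |0><0|
        Pi1 = np.array([[0, 0], [0, 1]], dtype=complex)  # projector |1><1|
        P_plus = np.sqrt((1 - t) / 2) * Pi0 + np.sqrt((1 + t) / 2) * Pi1
        P_minus = np.sqrt((1 + t) / 2) * Pi0 + np.sqrt((1 - t) / 2) * Pi1
        return P_plus, P_minus  # satisfy P+^2 + P-^2 = identity

    rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)  # toy qubit state
    for x in (0.1, 1.0, 10.0):  # weak -> strong
        Pp, _ = weak_measurement_ops(x)
        prob = np.real(np.trace(Pp @ rho @ Pp.conj().T))
        print(x, prob)  # outcome probability as a function of strength
    ```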

  12. Probabilistic cosmological mass mapping from weak lensing shear

    DOE PAGES

    Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...

    2017-04-10

    Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  13. Probabilistic Cosmological Mass Mapping from Weak Lensing Shear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M. D.; Dawson, W. A.; Ng, K. Y.

    2017-04-10

    We infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  14. When Generating Answers Benefits Arithmetic Skill: The Importance of Prior Knowledge

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij

    2008-01-01

    People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied…

  15. 78 FR 56242 - Agency Information Collection Activities: Prior Disclosure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-12

    ... information (total capital/startup costs and operations and maintenance costs). The comments that are... information collected. Type of Review: Extension (without change). Affected Public: Businesses. Estimated...

  16. The interaction of Bayesian priors and sensory data and its neural circuit implementation in visually-guided movement

    PubMed Central

    Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.

    2012-01-01

    Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
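
    The competition between sensory data and priors described in this record follows the standard rules of Gaussian Bayesian estimation. The following minimal sketch, with made-up numbers rather than the paper's fitted model, shows how a zero-speed prior shrinks the estimate when sensory noise is high (low-contrast grating) but has little effect when the signal is reliable (high-contrast dots).

    ```python
    import numpy as np

    def pursuit_speed_estimate(sensed_speed, sensory_sigma, prior_sigma):
        """Posterior mean for a Gaussian speed likelihood combined with a
        zero-mean Gaussian ("slow-speed") prior; weights are inverse variances."""
        w_sense = 1.0 / sensory_sigma**2
        w_prior = 1.0 / prior_sigma**2
        return (w_sense * sensed_speed) / (w_sense + w_prior)

    # Low contrast: noisy motion signal -> estimate shrinks strongly toward zero.
    print(pursuit_speed_estimate(10.0, sensory_sigma=8.0, prior_sigma=5.0))  # ~2.8
    # High contrast: reliable signal -> the prior barely matters.
    print(pursuit_speed_estimate(10.0, sensory_sigma=1.0, prior_sigma=5.0))  # ~9.6
    ```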

  17. Methods and Models for the Construction of Weakly Parallel Tests. Research Report 90-4.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programing model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information functions are equal play an important role in the…

  18. Weak hard X-ray emission from broad absorption line quasars: evidence for intrinsic X-ray weakness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, B.; Brandt, W. N.; Scott, A. E.

    We report NuSTAR observations of a sample of six X-ray weak broad absorption line (BAL) quasars. These targets, at z = 0.148-1.223, are among the optically brightest and most luminous BAL quasars known at z < 1.3. However, their rest-frame ≈2 keV luminosities are 14 to >330 times weaker than expected for typical quasars. Our results from a pilot NuSTAR study of two low-redshift BAL quasars, a Chandra stacking analysis of a sample of high-redshift BAL quasars, and a NuSTAR spectral analysis of the local BAL quasar Mrk 231 have already suggested the existence of intrinsically X-ray weak BAL quasars, i.e., quasars not emitting X-rays at the level expected from their optical/UV emission. The aim of the current program is to extend the search for such extraordinary objects. Three of the six new targets are weakly detected by NuSTAR with ≲ 45 counts in the 3-24 keV band, and the other three are not detected. The hard X-ray (8-24 keV) weakness observed by NuSTAR requires Compton-thick absorption if these objects have nominal underlying X-ray emission. However, a soft stacked effective photon index (Γ_eff ≈ 1.8) for this sample disfavors Compton-thick absorption in general. The uniform hard X-ray weakness observed by NuSTAR for this and the pilot samples selected with <10 keV weakness also suggests that the X-ray weakness is intrinsic in at least some of the targets. We conclude that the NuSTAR observations have likely discovered a significant population (≳ 33%) of intrinsically X-ray weak objects among the BAL quasars with significantly weak <10 keV emission. We suggest that intrinsically X-ray weak quasars might be preferentially observed as BAL quasars.

  19. Knowledge Modeling in Prior Art Search

    NASA Astrophysics Data System (ADS)

    Graf, Erik; Frommholz, Ingo; Lalmas, Mounia; van Rijsbergen, Keith

    This study explores the benefits of integrating knowledge representations in prior art patent retrieval. Key to the introduced approach is the utilization of human judgment available in the form of classifications assigned to patent documents. The paper first outlines in detail how a methodology for the extraction of knowledge from such a hierarchical classification system can be established. Potential ways of integrating this knowledge with existing Information Retrieval paradigms in a scalable and flexible manner are then investigated. Finally, based on these integration strategies, the effectiveness in terms of recall and precision is evaluated in the context of a prior art search task for European patents. As a result of this evaluation, it can be established that the proposed knowledge expansion techniques are in general particularly beneficial to recall and, with respect to optimizing field retrieval settings, further result in significant precision gains.

  20. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    NASA Astrophysics Data System (ADS)

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2018-05-01

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
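
    Of the three methods compared in this record, Kaiser-Squires is simple enough to sketch directly: on a flat sky it is a per-Fourier-mode linear inversion of the shear. The toy implementation below assumes periodic, mask-free, noise-free shear maps, which is exactly the idealization the paper criticizes KS for.

    ```python
    import numpy as np

    def kaiser_squires(gamma1, gamma2):
        """Direct Kaiser-Squires inversion of shear maps to E-mode convergence
        (flat-sky approximation; no treatment of masks or noise)."""
        ny, nx = gamma1.shape
        l1 = np.fft.fftfreq(nx)[np.newaxis, :]   # x-frequencies
        l2 = np.fft.fftfreq(ny)[:, np.newaxis]   # y-frequencies
        l_sq = l1**2 + l2**2
        l_sq[0, 0] = 1.0  # avoid 0/0; the mean of kappa is unconstrained anyway
        g1_hat = np.fft.fft2(gamma1)
        g2_hat = np.fft.fft2(gamma2)
        kappa_hat = ((l1**2 - l2**2) * g1_hat + 2 * l1 * l2 * g2_hat) / l_sq
        kappa_hat[0, 0] = 0.0
        return np.real(np.fft.ifft2(kappa_hat))  # real part = E-mode convergence
    ```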

  1. Weak decays of heavy hadrons into dynamically generated resonances

    DOE PAGES

    Oset, Eulogio; Liang, Wei -Hong; Bayar, Melahat; ...

    2016-01-28

    In this study, we present a review of recent works on weak decays of heavy mesons and baryons with two mesons, or a meson and a baryon, interacting strongly in the final state. The aim is to learn about the interaction of hadrons and how some particular resonances are produced in the reactions. It is shown that these reactions have peculiar features and act as filters for some quantum numbers, which allows one to identify some resonances easily and learn about their nature. The combination of basic elements of the weak interaction with the framework of the chiral unitary approach allows for an interpretation of the results of many reactions and adds novel information to different aspects of hadron interactions and the properties of dynamically generated resonances.

  2. Characterizing Volumetric Strain at Brady Hot Springs, Nevada, USA Using Geodetic Data, Numerical Models, and Prior Information

    NASA Astrophysics Data System (ADS)

    Reinisch, E. C.; Feigl, K. L.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Akerley, J.

    2017-12-01

    Time-dependent deformation has been observed at Brady Hot Springs using data from the Global Positioning System (GPS) and interferometric synthetic aperture radar (InSAR) [e.g., Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. We seek to determine the geophysical process governing the observed subsidence. As two end-member hypotheses, we consider thermal contraction and a decrease in pore fluid pressure. A decrease in temperature would cause contraction in the subsurface and subsidence at the surface. A decrease in pore fluid pressure would allow the volume of pores to shrink and also produce subsidence. To simulate these processes, we use a dislocation model that assumes uniform elastic properties in a half space [Okada, 1985]. The parameterization consists of many cubic volume elements (voxels), each of which contracts by closing its three mutually orthogonal bisecting square surfaces. Then we use linear inversion to solve for volumetric strain in each voxel given a measurement of range change. To differentiate between the two possible hypotheses, we use a Bayesian framework with geostatistical prior information. We perform inversion using each prior to decide if one leads to a more geophysically reasonable interpretation than the other. This work is part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" and is supported by the Geothermal Technology Office of the U.S. Department of Energy [DE-EE0006760].
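
    A minimal sketch of the linear inversion step described in this record, assuming the Green's-function matrix G relating voxel volumetric strain to surface range change has already been computed (e.g., from Okada-type dislocation solutions). The matrix, data, and covariances below are hypothetical placeholders, not project values.

    ```python
    import numpy as np

    def bayesian_linear_inversion(G, d, sigma_d, C_prior):
        """MAP estimate of voxel volumetric strains m from range-change data d,
        with Gaussian noise sigma_d and geostatistical prior covariance C_prior:
            m_map = (G^T G / sigma^2 + C_prior^{-1})^{-1} G^T d / sigma^2
        """
        A = G.T @ G / sigma_d**2 + np.linalg.inv(C_prior)
        b = G.T @ d / sigma_d**2
        return np.linalg.solve(A, b)

    # Toy example: 3 voxels observed through a hypothetical Green's matrix G.
    G = np.array([[1.0, 0.5, 0.2], [0.3, 1.0, 0.4]])
    d = np.array([0.01, 0.02])        # observed range change (m)
    C_prior = 1e-4 * np.eye(3)        # stand-in for a geostatistical covariance
    print(bayesian_linear_inversion(G, d, sigma_d=0.002, C_prior=C_prior))
    ```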

  3. Development of E-info gene(ca): a website providing computer-tailored information and question prompt prior to breast cancer genetic counseling.

    PubMed

    Albada, Akke; van Dulmen, Sandra; Otten, Roel; Bensing, Jozien M; Ausems, Margreet G E M

    2009-08-01

    This article describes the stepwise development of the website 'E-info gene(ca)'. The website provides counselees in breast cancer genetic counseling with computer-tailored information and a question prompt prior to their first consultation. Counselees generally do not know what to expect from genetic counseling and they tend to have a passive role, receiving large amounts of relatively standard information. Using the "intervention mapping approach," we developed E-info gene(ca) aiming to enhance counselees' realistic expectations and participation during genetic counseling. The information on this website is tailored to counselees' individual situation (e.g., the counselee's age and cancer history). The website covers the topics of the genetic counseling process, breast cancer risk, meaning of being a carrier of a cancer gene mutation, emotional consequences and hereditary breast cancer. Finally, a question prompt encourages counselees to prepare questions for their genetic counseling visit.

  4. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
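
    The idea behind a square-root algorithm can be sketched as follows: code each pixel on a scale proportional to the square root of its counts, so the quantization step tracks the Poisson shot noise. This is an illustrative simplification of the principle, not the exact Bernstein et al. (2010) codec.

    ```python
    import numpy as np

    def sqrt_compress(pixels, gain=1.0):
        """Square-root coding: the quantization step grows as sqrt(signal), so
        quantization noise stays below Poisson shot noise (a sketch of the idea)."""
        return np.round(2.0 * np.sqrt(np.maximum(pixels / gain, 0.0))).astype(np.int32)

    def sqrt_decompress(codes, gain=1.0):
        return gain * (codes / 2.0) ** 2

    raw = np.random.poisson(1000.0, size=5)        # 16-bit-scale pixel counts
    codes = sqrt_compress(raw)                      # fits in far fewer bits
    recon = sqrt_decompress(codes)
    print(raw, recon.astype(int))  # reconstruction error << sqrt(1000) shot noise
    ```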

  5. Looking forwards and backwards: The real-time processing of Strong and Weak Crossover

    PubMed Central

    Lidz, Jeffrey; Phillips, Colin

    2017-01-01

    We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but the parser ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover binding of the pronoun is ruled out by the argument condition, but not Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval. PMID:28936483

  6. Balancing the Role of Priors in Multi-Observer Segmentation Evaluation

    PubMed Central

    Huang, Xiaolei; Wang, Wei; Lopresti, Daniel; Long, Rodney; Antani, Sameer; Xue, Zhiyun; Thoma, George

    2009-01-01

    Comparison of a group of multiple observer segmentations is known to be a challenging problem. A good segmentation evaluation method would allow different segmentations not only to be compared, but to be combined to generate a "true" segmentation with higher consensus. Numerous multi-observer segmentation evaluation approaches have been proposed in the literature; STAPLE in particular probabilistically estimates the true segmentation by optimally combining the observed segmentations with a prior model of the truth. As an Expectation-Maximization (EM) algorithm, STAPLE's convergence to the desired local minimum depends on good initializations of the truth prior and the observer-performance prior. However, accurate modeling of the initial truth prior is nontrivial. Moreover, of the two priors, the truth prior always dominates, so that in scenarios where meaningful observer-performance priors are available, STAPLE cannot take advantage of that information. In this paper, we propose a Bayesian decision formulation of the problem that permits the two types of prior knowledge to be integrated in a complementary manner in four cases with differing application purposes: (1) with known truth prior; (2) with observer prior; (3) with neither truth prior nor observer prior; and (4) with both truth prior and observer prior. The third and fourth cases are not discussed (or are effectively ignored) by STAPLE, and in our research we propose a new method to combine multiple-observer segmentations based on the maximum a posteriori (MAP) principle, which respects the observer prior regardless of the availability of the truth prior. Based on the four scenarios, we have developed a web-based software application that implements the flexible segmentation evaluation framework for digitized uterine cervix images. Experiment results show that our framework has flexibility in effectively integrating different priors for multi-observer segmentation evaluation and it also

  7. Estimating the weak-lensing rotation signal in radio cosmic shear surveys

    NASA Astrophysics Data System (ADS)

    Thomas, Daniel B.; Whittaker, Lee; Camera, Stefano; Brown, Michael L.

    2017-09-01

    Weak lensing has become an increasingly important tool in cosmology and the use of galaxy shapes to measure cosmic shear has become routine. The weak-lensing distortion tensor contains two other effects in addition to the two components of shear: the convergence and rotation. The rotation mode is not measurable using the standard cosmic shear estimators based on galaxy shapes, as there is no information on the original shapes of the images before they were lensed. Due to this, no estimator has been proposed for the rotation mode in cosmological weak-lensing surveys, and the rotation mode has never been constrained. Here, we derive an estimator for this quantity, which is based on the use of radio polarization measurements of the intrinsic position angles of galaxies. The rotation mode can be sourced by physics beyond Λ cold dark matter (ΛCDM), and also offers the chance to perform consistency checks of ΛCDM and of weak-lensing surveys themselves. We present simulations of this estimator and show that, for the pedagogical example of cosmic string spectra, this estimator could detect a signal that is consistent with the constraints from Planck. We examine the connection between the rotation mode and the shear B modes and thus how this estimator could help control systematics in future radio weak-lensing surveys.

  8. Topical video object discovery from key frames by modeling word co-occurrence prior.

    PubMed

    Zhao, Gangqiang; Yuan, Junsong; Hua, Gang; Yang, Jiong

    2015-12-01

    A topical video object refers to an object that is frequently highlighted in a video. It could be, e.g., the product logo or the leading actor/actress in a TV commercial. We propose a topic model that incorporates a word co-occurrence prior for efficient discovery of topical video objects from a set of key frames. Previous work using topic models, such as latent Dirichlet allocation (LDA), for video object discovery often takes a bag-of-visual-words representation, which ignores important co-occurrence information among the local features. We show that such bottom-up, data-driven co-occurrence information can conveniently be incorporated into LDA with a Gaussian Markov prior, which combines top-down probabilistic topic modeling with bottom-up priors in a unified model. Our experiments on challenging videos demonstrate that the proposed approach can discover different types of topical objects despite variations in scale, viewpoint, color, and lighting, or even partial occlusions. The efficacy of the co-occurrence prior is clearly demonstrated when compared with topic models without such priors.

  9. On modeling weak sinks in MODPATH

    USGS Publications Warehouse

    Abrams, Daniel B.; Haitjema, Henk; Kauffman, Leon J.

    2012-01-01

    Regional groundwater flow systems often contain both strong sinks and weak sinks. A strong sink extracts water from the entire aquifer depth, while a weak sink lets some water pass underneath or over the actual sink. The numerical groundwater flow model MODFLOW may allow a sink cell to act as a strong or weak sink, hence extracting all water that enters the cell or allowing some of that water to pass. A physical strong sink can be modeled by either a strong sink cell or a weak sink cell, with the latter generally occurring in low resolution models. Likewise, a physical weak sink may also be represented by either type of sink cell. The representation of weak sinks in the particle tracing code MODPATH is more equivocal than in MODFLOW. With the appropriate parameterization of MODPATH, particle traces and their associated travel times to weak sink streams can be modeled with adequate accuracy, even in single layer models. Weak sink well cells, on the other hand, require special measures as proposed in the literature to generate correct particle traces and individual travel times and hence capture zones. We found that the transit time distributions for well water generally do not require special measures provided aquifer properties are locally homogeneous and the well draws water from the entire aquifer depth, an important observation for determining the response of a well to non-point contaminant inputs.

  10. 76 FR 82315 - Agency Information Collection Activities: Prior Disclosure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-30

    ...: Extension (without change). Affected Public: Businesses. Estimated Number of Respondents: 3,500. Estimated... collection be extended with no change to the burden hours or to the information collected. This document is... appropriate automated, electronic, mechanical, or other technological techniques or other forms of information...

  11. PET image reconstruction using multi-parametric anato-functional priors

    NASA Astrophysics Data System (ADS)

    Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.

    2017-08-01

    In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data, in order to address the limitations of conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for the reconstruction of low-count PET data is also introduced and demonstrated, in comparison with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio prior and non-local Tikhonov priors with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For the simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET-unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET-unique features and also an improved bias-variance performance. In agreement with the simulation results, the clinical results
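
    As background for the MAP reconstructions this record discusses, the sketch below implements the classic one-step-late MAP-EM update (Green 1990) with a simple quadratic smoothing prior on a toy 1D system. The anatomical and multi-parametric priors studied in the paper would replace this quadratic penalty; the system matrix and data here are synthetic stand-ins.

    ```python
    import numpy as np

    def osl_em(y, A, beta, n_iter=50):
        """One-step-late MAP-EM with a quadratic smoothing prior (circular
        boundary), as a generic stand-in for the priors discussed above."""
        lam = np.ones(A.shape[1])
        sens = A.sum(axis=0)  # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = A.T @ (y / np.maximum(A @ lam, 1e-12))
            # gradient of U(lam) = sum_j (lam_j - lam_{j+1})^2
            grad_U = 2 * (lam - np.roll(lam, 1)) + 2 * (lam - np.roll(lam, -1))
            lam = lam * ratio / np.maximum(sens + beta * grad_U, 1e-12)
        return lam

    # Toy 1D "scanner": random nonnegative system matrix, Poisson data.
    rng = np.random.default_rng(0)
    A = rng.uniform(0, 1, size=(40, 20))
    truth = np.concatenate([5 * np.ones(10), 1 * np.ones(10)])
    y = rng.poisson(A @ truth).astype(float)
    print(osl_em(y, A, beta=0.1)[:5])  # smoothed estimate of the hot region
    ```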

  12. Analysis of weak interactions and Eotvos experiments

    NASA Technical Reports Server (NTRS)

    Hsu, J. P.

    1978-01-01

    The intermediate-vector-boson model is preferred over the current-current model as a basis for calculating effects due to weak self-energy. Attention is given to a possible violation of the equivalence principle by weak-interaction effects, and it is noted that effects due to weak self-energy are at least an order of magnitude greater than those due to the weak binding energy for typical nuclei. It is assumed that the weak and electromagnetic energies are independent.

  13. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.

  14. Interactive lesion segmentation with shape priors from offline and online learning.

    PubMed

    Shepherd, Tony; Prince, Simon J D; Alexander, Daniel C

    2012-09-01

    In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.

  15. Prior knowledge guided active modules identification: an integrated multi-objective approach.

    PubMed

    Chen, Weiqi; Liu, Jing; He, Shan

    2017-03-14

    An active module, defined as an area in a biological network that shows striking changes in molecular activity or phenotypic signatures, is important for revealing dynamic and process-specific information that is correlated with cellular or disease states. A prior-information-guided active module identification approach is proposed to detect modules that are both active and enriched in prior knowledge. We formulate the active module identification problem as a multi-objective optimisation problem, which consists of two conflicting objective functions: simultaneously maximising the coverage of known biological pathways and the activity of the active module. The network is constructed from a protein-protein interaction database. A beta-uniform-mixture model is used to estimate the distribution of p-values and to generate activity scores from microarray data. A multi-objective evolutionary algorithm is used to search for Pareto optimal solutions. We also incorporate a novel constraint based on algebraic connectivity to ensure the connectedness of the identified active modules. Application of the proposed algorithm to a small yeast molecular network shows that it can identify modules with high activities and with more cross-talk nodes between related functional groups. The Pareto solutions generated by the algorithm provide different trade-offs between prior knowledge and novel information from the data. The approach is then applied to microarray data from diclofenac-treated yeast cells to build a network and identify modules that elucidate the molecular mechanisms of diclofenac toxicity and resistance. Gene ontology analysis is applied to the identified modules for biological interpretation. Integrating knowledge of functional groups into the identification of active modules is an effective method and provides flexible control of the balance between purely data-driven methods and prior information guidance.

  16. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    DOE PAGES

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...

    2018-05-15

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  17. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey, N.; et al.

    2018-01-26

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  18. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  19. CLUMP-3D: Three-dimensional Shape and Structure of 20 CLASH Galaxy Clusters from Combined Weak and Strong Lensing

    NASA Astrophysics Data System (ADS)

    Chiu, I.-Non; Umetsu, Keiichi; Sereno, Mauro; Ettori, Stefano; Meneghetti, Massimo; Merten, Julian; Sayers, Jack; Zitrin, Adi

    2018-06-01

    We perform a three-dimensional triaxial analysis of 16 X-ray regular and 4 high-magnification galaxy clusters selected from the CLASH survey by combining two-dimensional weak-lensing and central strong-lensing constraints. In a Bayesian framework, we constrain the intrinsic structure and geometry of each individual cluster assuming a triaxial Navarro-Frenk-White halo with arbitrary orientations, characterized by the mass M_200c, halo concentration c_200c, and triaxial axis ratios (q_a ≤ q_b), and investigate scaling relations between these halo structural parameters. From triaxial modeling of the X-ray-selected subsample, we find that the halo concentration decreases with increasing cluster mass, with a mean concentration of c_200c = 4.82 ± 0.30 at the pivot mass M_200c = 10^15 M_⊙ h^-1. This is consistent with the result from spherical modeling, c_200c = 4.51 ± 0.14. Independently of the priors, the minor-to-major axis ratio q_a of our full sample exhibits a clear deviation from the spherical configuration (q_a = 0.52 ± 0.04 at 10^15 M_⊙ h^-1 with uniform priors), with a weak dependence on the cluster mass. Combining all 20 clusters, we obtain a joint ensemble constraint on the minor-to-major axis ratio of q_a = 0.652 (+0.162/-0.078) and a lower bound on the intermediate-to-major axis ratio of q_b > 0.63 at the 2σ level from an analysis with uniform priors. Assuming priors on the axis ratios derived from numerical simulations, we constrain the degree of triaxiality for the full sample to be T = 0.79 ± 0.03 at 10^15 M_⊙ h^-1, indicating a preference for a prolate geometry of cluster halos. We find no statistical evidence for an orientation bias (f_geo = 0.93 ± 0.07), which is insensitive to the priors and in agreement with the theoretical expectation for the CLASH clusters.

  20. X-Ray Weak Broad-Line Quasars: Absorption or Intrinsic X-Ray Weakness

    NASA Technical Reports Server (NTRS)

    Risaliti, Guido; Mushotzky, Richard F. (Technical Monitor)

    2004-01-01

    XMM observations of X-ray weak quasars were performed during 2003. The data for all but the last observation are now available (there was a delay of several months relative to the initial schedule, due to high background flares which contaminated the observations; as a consequence, most of them had to be rescheduled). We have reduced and analyzed these data and obtained interesting preliminary scientific results. Of the eight sources, four are confirmed to be extremely X-ray weak, in agreement with the results of previous Chandra observations. Three sources are confirmed to be highly variable both in flux (by factors of 20-50) and in spectral properties (dramatic changes in spectral index). For both these groups of objects, an article is in preparation. Preliminary results were presented at an international workshop on AGN surveys in December 2003 in Cozumel (Mexico). In order to further understand the nature of these X-ray weak quasars, we submitted proposals for spectroscopy at optical and infrared telescopes. We obtained time at the TNG 4 meter telescope for near-IR observations, and at the Hobby-Eberly Telescope for optical high-resolution spectroscopy. These observations will be performed in early 2004 and will complement the XMM data, in order to understand whether the X-ray weakness of these sources is an intrinsic property or is due to absorption by circumnuclear material.

  1. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE PAGES

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    2017-04-24

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.
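
    A minimal sketch of the core idea, under the assumption that the weighted combination is treated as a mixture of the experts' Gaussian priors on the logistic regression coefficients; the expert means, covariances, and weights below are hypothetical illustrations, not the paper's settings.

    ```python
    import numpy as np

    def sample_weighted_priors(expert_means, expert_covs, weights, n_draws, rng):
        """Draw logistic-regression parameters from a weighted mixture of expert
        priors, so that design utilities can reflect all experts' beliefs."""
        ks = rng.choice(len(weights), size=n_draws, p=weights)
        return np.array([rng.multivariate_normal(expert_means[k], expert_covs[k])
                         for k in ks])

    rng = np.random.default_rng(1)
    # Two experts with conflicting views of reliability vs. temperature (made up):
    means = [np.array([2.0, -0.05]), np.array([0.5, -0.20])]
    covs = [0.1 * np.eye(2), 0.1 * np.eye(2)]
    theta = sample_weighted_priors(means, covs, weights=[0.5, 0.5],
                                   n_draws=1000, rng=rng)
    # Prior predictive failure probability at a candidate test temperature x = 30;
    # a design criterion would average such quantities over candidate designs.
    x = 30.0
    p_fail = 1.0 / (1.0 + np.exp(-(theta[:, 0] + theta[:, 1] * x)))
    print(p_fail.mean())
    ```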

  2. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  3. Randomly organized lipids and marginally stable proteins: a coupling of weak interactions to optimize membrane signaling.

    PubMed

    Rice, Anne M; Mahling, Ryan; Fealey, Michael E; Rannikko, Anika; Dunleavy, Katie; Hendrickson, Troy; Lohese, K Jean; Kruggel, Spencer; Heiling, Hillary; Harren, Daniel; Sutton, R Bryan; Pastor, John; Hinderliter, Anne

    2014-09-01

    Eukaryotic lipids in a bilayer are dominated by weak cooperative interactions. These interactions impart highly dynamic and pliable properties to the membrane. C2 domain-containing proteins in the membrane also interact weakly and cooperatively giving rise to a high degree of conformational plasticity. We propose that this feature of weak energetics and plasticity shared by lipids and C2 domain-containing proteins enhances a cell's ability to transduce information across the membrane. We explored this hypothesis using information theory to assess the information storage capacity of model and mast cell membranes, as well as differential scanning calorimetry, carboxyfluorescein release assays, and tryptophan fluorescence to assess protein and membrane stability. The distribution of lipids in mast cell membranes encoded 5.6-5.8 bits of information. More information resided in the acyl chains than the head groups and in the inner leaflet of the plasma membrane than the outer leaflet. When the lipid composition and information content of model membranes were varied, the associated C2 domains underwent large changes in stability and denaturation profile. The C2 domain-containing proteins are therefore acutely sensitive to the composition and information content of their associated lipids. Together, these findings suggest that the maximum flow of signaling information through the membrane and into the cell is optimized by the cooperation of near-random distributions of membrane lipids and proteins. This article is part of a Special Issue entitled: Interfacially Active Peptides and Proteins. Guest Editors: William C. Wimley and Kalina Hristova. Copyright © 2014 Elsevier B.V. All rights reserved.
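
    The reported information content connects to the standard Shannon measure. As a rough illustration (the composition below is hypothetical, not the paper's data), the entropy in bits of a lipid distribution can be computed as:

    ```python
    import numpy as np

    def shannon_entropy_bits(proportions):
        """Shannon entropy H = -sum p*log2(p) of a composition, in bits."""
        p = np.asarray(proportions, dtype=float)
        p = p[p > 0] / p.sum()                # normalize, drop empty classes
        return float(-(p * np.log2(p)).sum())

    # Hypothetical mole fractions of lipid species in one leaflet.
    composition = [0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02]
    print(f"{shannon_entropy_bits(composition):.2f} bits")
    # A near-uniform ("more random") composition stores more information:
    print(f"{shannon_entropy_bits([1/8] * 8):.2f} bits")  # log2(8) = 3 bits
    ```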

  4. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
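
    The Noisy-OR combination described here has a simple closed form: an edge receives consensus support if at least one source "fires". A minimal sketch with hypothetical source reliabilities (not the authors' code):

    ```python
    import numpy as np

    def noisy_or(source_probs, leak=0.0):
        """Noisy-OR consensus: P(edge) = 1 - (1 - leak) * prod(1 - p_i).

        source_probs: per-source probabilities that the interaction exists.
        leak: baseline probability that the edge exists with no support.
        """
        source_probs = np.asarray(source_probs, dtype=float)
        return 1.0 - (1.0 - leak) * np.prod(1.0 - source_probs)

    # Three knowledge sources (e.g., pathway DB, GO similarity, domain data)
    # give weak individual support, but jointly a moderate consensus prior.
    print(noisy_or([0.30, 0.20, 0.25]))   # = 0.58
    ```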

  5. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291

  6. Expectancy influences on attention to threat are only weak and transient: Behavioral and physiological evidence.

    PubMed

    Aue, Tatjana; Chauvigné, Léa A S; Bristle, Mirko; Okon-Singer, Hadas; Guex, Raphaël

    2016-12-01

    Can prior expectancies shape attention to threat? To answer this question, we manipulated the expectancies of spider phobics and nonfearful controls regarding the appearance of spider and bird targets in a visual search task. We observed robust evidence for expectancy influences on attention to birds, reflected in error rates, reaction times, pupil diameter, and heart rate (HR). We found no solid effect, however, of the same expectancies on attention to spiders; only HR revealed a weak and transient impact of prior expectancies on the orientation of attention to threat. Moreover, these asymmetric effects for spiders versus birds were observed in both phobics and controls. Our results are thus consistent with the notion of a threat detection mechanism that is only partially permeable to current expectancies, thereby increasing chances of survival in situations that are mistakenly perceived as safe. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. On weak lensing shape noise

    NASA Astrophysics Data System (ADS)

    Niemi, Sami-Matias; Kitching, Thomas D.; Cropper, Mark

    2015-12-01

    One of the most powerful techniques to study the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and of the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and to isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications, and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy property using only photometric data, allowing a reduction of shape noise. As a proof of concept the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply this to the case of weak lensing power spectra.

  8. Peripheral facial weakness (Bell's palsy).

    PubMed

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is a facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy that is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis with complete recovery in about 80% of patients, 15% experience some degree of permanent nerve damage and severe consequences remain in 5% of patients.

  9. [A new method of distinguishing weak and overlapping signals of proton magnetic resonance spectroscopy].

    PubMed

    Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong

    2012-12-01

    In this paper, a new method of combining translation invariant (TI) and wavelet-threshold (WT) algorithms to distinguish weak and overlapping signals of proton magnetic resonance spectroscopy (1H-MRS) is presented. First, the 1H-MRS spectrum signal is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI method and WT method are applied to detect the weak signals overlapped by the strong ones. Through the analysis of the simulation data, we can see that both frequency and amplitude information of small signals can be obtained accurately by the algorithm, and through combination with a signal-fitting method, quantitative calculation of the area under weak signal peaks can be realized.
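
    The TI+WT combination can be sketched with cycle spinning: shift the spectrum, soft-threshold its wavelet coefficients, unshift, and average over shifts. Below is a minimal sketch using PyWavelets on a synthetic spectrum; the wavelet choice, threshold rule, and test signal are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def ti_wavelet_denoise(signal, wavelet="db4", level=4, n_shifts=16):
        """Translation-invariant soft-threshold denoising by cycle spinning."""
        out = np.zeros_like(signal, dtype=float)
        # Universal threshold from a noise-scale estimate (MAD of finest coeffs).
        detail = pywt.wavedec(signal, wavelet, level=level)[-1]
        sigma = np.median(np.abs(detail)) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
        for shift in range(n_shifts):
            shifted = np.roll(signal, shift)
            coeffs = pywt.wavedec(shifted, wavelet, level=level)
            coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                    for c in coeffs[1:]]
            out += np.roll(pywt.waverec(coeffs, wavelet)[:len(signal)], -shift)
        return out / n_shifts

    # Synthetic 1H-MRS-like trace: a strong peak overlapping a weak one, plus noise.
    x = np.linspace(0, 1, 1024)
    clean = np.exp(-((x - 0.50) / 0.010) ** 2) + 0.05 * np.exp(-((x - 0.53) / 0.004) ** 2)
    noisy = clean + 0.02 * np.random.default_rng(1).normal(size=x.size)
    denoised = ti_wavelet_denoise(noisy)
    ```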

  10. Identical Quantum Particles and Weak Discernibility

    NASA Astrophysics Data System (ADS)

    Dieks, Dennis; Versteegh, Marijn A. M.

    2008-10-01

    Saunders has recently claimed that “identical quantum particles” with an anti-symmetric state (fermions) are weakly discernible objects, just like irreflexively related ordinary objects in situations with perfect symmetry (Black’s spheres, for example). Weakly discernible objects have all their qualitative properties in common but nevertheless differ from each other by virtue of (a generalized version of) Leibniz’s principle, since they stand in relations an entity cannot have to itself. This notion of weak discernibility has been criticized as question begging, but we defend and accept it for classical cases like Black’s spheres. We argue, however, that the quantum mechanical case is different. Here the application of the notion of weak discernibility indeed is question begging and in conflict with standard interpretational ideas. We conclude that the introduction of the conceptual resource of weak discernibility does not change the interpretational status quo in quantum mechanics.

  11. A Calibrated Power Prior Approach to Borrow Information from Historical Data with Application to Biosimilar Clinical Trials.

    PubMed

    Pan, Haitao; Yuan, Ying; Xia, Jielai

    2017-11-01

    A biosimilar refers to a follow-on biologic intended to be approved for marketing based on biosimilarity to an existing patented biological product (i.e., the reference product). To develop a biosimilar product, it is essential to demonstrate biosimilarity between the follow-on biologic and the reference product, typically through two-arm randomized trials. We propose a Bayesian adaptive design for trials to evaluate biosimilar products. To take advantage of the abundant historical data on the efficacy of the reference product that is typically available at the time a biosimilar product is developed, we propose the calibrated power prior, which allows our design to adaptively borrow information from the historical data according to the congruence between the historical data and the new data collected from the current trial. We propose a new measure, the Bayesian biosimilarity index, to measure the similarity between the biosimilar and the reference product. During the trial, we evaluate the Bayesian biosimilarity index in a group sequential fashion based on the accumulating interim data, and stop the trial early once there is enough information to conclude or reject the similarity. Extensive simulation studies show that the proposed design has higher power than traditional designs. We applied the proposed design to a biosimilar trial for treating rheumatoid arthritis.
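
    The power-prior mechanics can be shown in conjugate form for a binary endpoint: historical successes and failures enter a Beta posterior discounted by a power a0 in [0, 1], with a0 set from a congruence measure between historical and current response rates. The calibration rule below is a crude stand-in for the authors' calibrated power prior, purely for illustration; all numbers are invented.

    ```python
    import numpy as np

    def power_prior_posterior(y_new, n_new, y_hist, n_hist, a0, a=1.0, b=1.0):
        """Beta(a, b) prior plus historical data discounted by power a0."""
        alpha = a + y_new + a0 * y_hist
        beta = b + (n_new - y_new) + a0 * (n_hist - y_hist)
        return alpha, beta

    def crude_a0(y_new, n_new, y_hist, n_hist, scale=5.0):
        """Toy congruence-based discount: a0 -> 1 when rates agree, -> 0 otherwise."""
        gap = abs(y_new / n_new - y_hist / n_hist)
        return float(np.exp(-scale * gap))

    # Abundant historical data on the reference product, small current trial.
    y_hist, n_hist = 240, 400          # 60% historical response rate
    y_new, n_new = 33, 50              # 66% observed in the current trial
    a0 = crude_a0(y_new, n_new, y_hist, n_hist)
    alpha, beta = power_prior_posterior(y_new, n_new, y_hist, n_hist, a0)
    print(f"a0 = {a0:.2f}, posterior mean = {alpha / (alpha + beta):.3f}")
    ```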

  12. Changing ideas about others’ intentions: updating prior expectations tunes activity in the human motor system

    PubMed Central

    Jacquet, Pierre O.; Roy, Alice C.; Chambon, Valérian; Borghi, Anna M.; Salemme, Roméo; Farnè, Alessandro; Reilly, Karen T.

    2016-01-01

    Predicting intentions from observing another agent’s behaviours is often thought to depend on motor resonance – i.e., the motor system’s response to a perceived movement by the activation of its stored motor counterpart, but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally-induced updates in observers’ prior expectations modulate CSE when predictions are made under situations of perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others’ intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction. PMID:27243157

  13. Changing ideas about others' intentions: updating prior expectations tunes activity in the human motor system.

    PubMed

    Jacquet, Pierre O; Roy, Alice C; Chambon, Valérian; Borghi, Anna M; Salemme, Roméo; Farnè, Alessandro; Reilly, Karen T

    2016-05-31

    Predicting intentions from observing another agent's behaviours is often thought to depend on motor resonance - i.e., the motor system's response to a perceived movement by the activation of its stored motor counterpart, but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally-induced updates in observers' prior expectations modulate CSE when predictions are made under situations of perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others' intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction.

  14. Intensive care unit-acquired weakness.

    PubMed

    Griffiths, Richard D; Hall, Jesse B

    2010-03-01

    Severe weakness is being recognized as a complication that impacts significantly on the pace and degree of recovery and return to former functional status of patients who survive the organ failures that mandate life-support therapies such as mechanical ventilation. Despite the apparent importance of this problem, much remains to be understood about its incidence, causes, prevention, and treatment. Review from literature and an expert round-table. The Brussels Round Table Conference in 2009 convened more than 20 experts in the fields of intensive care, neurology, and muscle physiology to review current understandings of intensive care unit-acquired weakness and to improve clinical outcome. Formal electrophysiological evaluation of patients with intensive care unit-acquired weakness can identify peripheral neuropathies, myopathies, and combinations of these disorders, although the correlation of these findings to weakness measurable at the bedside is not always precise. For routine clinical purposes, bedside assessment of neuromuscular function can be performed but is often confounded by complicating factors such as sedative and analgesic administration. Risk factors for development of intensive care unit-acquired weakness include bed rest itself, sepsis, and corticosteroid exposure. A strong association exists between weakness and long-term ventilator dependence; weakness is a major determinant of patient outcomes after surviving acute respiratory failure and may be present for months, or indefinitely, in the convalescence phase of critical illness. Although much has been learned about the physiology and cell and molecular biology of skeletal and diaphragm dysfunction under conditions of aging, exercise, disuse, and sepsis, the application of these understandings to the bedside requires more study in both bench models and patients. Although a trend toward greater immobilization and sedation of patients has characterized the past several decades of intensive care

  15. Analysis of factors related to arm weakness in patients with breast cancer-related lymphedema.

    PubMed

    Lee, Daegu; Hwang, Ji Hye; Chu, Inho; Chang, Hyun Ju; Shim, Young Hun; Kim, Jung Hyun

    2015-08-01

    The aim of this study was to evaluate the ratio of significant weakness in the affected arm of breast cancer-related lymphedema patients to their unaffected side. Another purpose was to identify factors related to arm weakness and physical function in patients with breast cancer-related lymphedema. Consecutive patients (n = 80) attended a single evaluation session following their outpatient lymphedema clinic visit. Possible independent factors (i.e., lymphedema, pain, psychological, educational, and behavioral) were evaluated. Handgrip strength was used to assess upper extremity muscle strength and the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire was used to assess upper extremity physical function. Multivariate logistic regression was performed using factors that had significant differences between the handgrip weakness and non-weakness groups. Out of the 80 patients with breast cancer-related lymphedema, 29 patients (36.3 %) had significant weakness in the affected arm. Weakness of the arm with lymphedema was not related to lymphedema itself, but was related to the fear of using the affected limb (odds ratio = 1.76, 95 % confidence interval = 1.30-2.37). Fears of using the affected limb and depression significantly contributed to the variance in DASH scores. Appropriate physical and psychological interventions, including providing accurate information and reassurance of physical activity safety, are necessary to prevent arm weakness and physical dysfunction in patients with breast cancer-related lymphedema.
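
    The multivariate logistic step reported here (an odds ratio with a 95% CI for fear of limb use) follows the standard pattern. A minimal sketch with statsmodels on synthetic data; the variable names, effect sizes, and data are invented for illustration only.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 80
    fear = rng.normal(3.0, 1.0, n)            # fear-of-use score (hypothetical)
    depression = rng.normal(0.0, 1.0, n)
    logit = -2.0 + 0.6 * fear + 0.2 * depression
    weak = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # handgrip weakness yes/no

    X = sm.add_constant(np.column_stack([fear, depression]))
    res = sm.Logit(weak, X).fit(disp=0)
    odds_ratios = np.exp(res.params)          # exponentiated coefficients
    conf_int = np.exp(res.conf_int())         # 95% CI on the odds-ratio scale
    print(odds_ratios, conf_int, sep="\n")
    ```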

  16. Bayesian road safety analysis: incorporation of past evidence and effect of hyper-prior choice.

    PubMed

    Miranda-Moreno, Luis F; Heydari, Shahram; Lord, Dominique; Fu, Liping

    2013-09-01

    This paper aims to address two related issues when applying hierarchical Bayesian models for road safety analysis, namely: (a) how to incorporate available information from previous studies or past experiences in the (hyper) prior distributions for model parameters and (b) what are the potential benefits of incorporating past evidence on the results of a road safety analysis when working with scarce accident data (i.e., when calibrating models with crash datasets characterized by a very low average number of accidents and a small number of sites). A simulation framework was developed to evaluate the performance of alternative hyper-priors including informative and non-informative Gamma, Pareto, as well as Uniform distributions. Based on this simulation framework, different data scenarios (i.e., number of observations and years of data) were defined and tested using crash data collected at 3-legged rural intersections in California and crash data collected for rural 4-lane highway segments in Texas. This study shows how the accuracy of model parameter estimates (inverse dispersion parameter) is considerably improved when incorporating past evidence, in particular when working with a small number of observations and crash data with a low mean. The results also illustrate that when the sample size (more than 100 sites) and the number of years of crash data is relatively large, neither the incorporation of past experience nor the choice of the hyper-prior distribution may affect the final results of a traffic safety analysis. As a potential solution to the problem of low sample mean and small sample size, this paper suggests some practical guidance on how to incorporate past evidence into informative hyper-priors. By combining evidence from past studies and data available, the model parameter estimates can significantly be improved. The effect of prior choice seems to be less important on the hotspot identification. The results show the benefits of incorporating prior
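
    The benefit of informative priors with low-mean crash data can be illustrated in the simplest conjugate setting: Poisson crash counts with a Gamma prior on the site mean. This is a deliberate simplification of the paper's hierarchical negative-binomial setting, and the prior values below are invented.

    ```python
    import numpy as np

    def gamma_poisson_posterior(counts, a, b):
        """Gamma(a, b) prior (shape, rate) + Poisson counts -> Gamma posterior."""
        return a + np.sum(counts), b + len(counts)

    rng = np.random.default_rng(7)
    true_rate = 0.3                            # low-mean crash data
    counts = rng.poisson(true_rate, size=5)    # only 5 site-years observed

    # Vague prior vs. an informative prior built from past studies (mean 0.35).
    for label, (a, b) in [("vague", (0.01, 0.01)), ("informative", (3.5, 10.0))]:
        a_post, b_post = gamma_poisson_posterior(counts, a, b)
        mean = a_post / b_post
        sd = np.sqrt(a_post) / b_post
        print(f"{label:12s} posterior mean {mean:.3f} (sd {sd:.3f})")
    ```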

  17. Estimating Bayesian Phylogenetic Information Content

    PubMed Central

    Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan

    2016-01-01

    Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
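
    The core quantity, information as the entropy drop from prior to posterior over tree topologies, is easy to state in code. A minimal sketch for a toy topology space; both distributions are invented for illustration.

    ```python
    import numpy as np

    def entropy_bits(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Toy: 15 possible unrooted topologies for 5 taxa, uniform topology prior.
    prior = np.full(15, 1 / 15)
    # Hypothetical marginal posterior concentrated on a few trees by the data.
    posterior = np.array([0.70, 0.15, 0.08, 0.04, 0.03] + [0.0] * 10)

    info_bits = entropy_bits(prior) - entropy_bits(posterior)
    print(f"information about topology: {info_bits:.2f} bits")
    # Zero bits would mean the data say nothing beyond the prior;
    # saturation analyses track how this quantity grows (or fails to).
    ```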

  18. Photodiode Preamplifier for Laser Ranging With Weak Signals

    NASA Technical Reports Server (NTRS)

    Abramovici, Alexander; Chapsky, Jacob

    2007-01-01

    An improved preamplifier circuit has been designed for processing the output of an avalanche photodiode (APD) that is used in a high-resolution laser ranging system to detect laser pulses returning from a target. The improved circuit stands in contrast to prior such circuits in which the APD output current pulses are made to pass, variously, through wide-band or narrow-band load networks before preamplification. A major disadvantage of the prior wide-band load networks is that they are highly susceptible to noise, which degrades timing resolution. A major disadvantage of the prior narrow-band load networks is that they make it difficult to sample the amplitudes of the narrow laser pulses ordinarily used in ranging. In the improved circuit, a load resistor is connected to the APD output and its value is chosen so that the time constant defined by this resistance and the APD capacitance is large, relative to the duration of a laser pulse. The APD capacitance becomes initially charged by the pulse of current generated by a return laser pulse, so that the rise time of the load-network output is comparable to the duration of the return pulse. Thus, the load-network output is characterized by a fast-rising leading edge, which is necessary for accurate pulse timing. On the other hand, the resistance-capacitance combination constitutes a lowpass filter, which helps to suppress noise. The long time constant causes the load network output pulse to have a long shallow-sloping trailing edge, which makes it easy to sample the amplitude of the return pulse. The output of the load network is fed to a low-noise, wide-band amplifier. The amplifier must be a wide-band one in order to preserve the sharp pulse rise for timing. The suppression of noise and the use of a low-noise amplifier enable the ranging system to detect relatively weak return pulses.
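
    The trade-off described (rise time set by the pulse, trailing edge set by the RC time constant) can be reproduced with a one-pole simulation. The component values below are hypothetical, chosen only so that tau = RC is long relative to the pulse duration.

    ```python
    import numpy as np

    # Hypothetical values: APD capacitance 2 pF, load resistor 100 kOhm.
    C, R = 2e-12, 1e5
    tau = R * C                       # 200 ns, long vs. a ~5 ns return pulse

    dt = 0.1e-9
    t = np.arange(0, 1e-6, dt)
    i_pulse = np.where(t < 5e-9, 1e-6, 0.0)   # 5 ns, 1 uA photocurrent pulse

    # Forward-Euler integration of C dV/dt = i(t) - V/R.
    v = np.zeros_like(t)
    for k in range(1, len(t)):
        v[k] = v[k - 1] + dt * (i_pulse[k - 1] - v[k - 1] / R) / C

    rise_idx = np.argmax(v)
    print(f"peak {v.max()*1e3:.2f} mV at {t[rise_idx]*1e9:.0f} ns; "
          f"trailing-edge time constant ~ {tau*1e9:.0f} ns")
    ```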

  19. Probing Primordial Non-Gaussianity with Weak-lensing Minkowski Functionals

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi; Nishimichi, Takahiro

    2012-11-01

    We study the cosmological information contained in the Minkowski functionals (MFs) of weak gravitational lensing convergence maps. We show that the MFs provide strong constraints on the local-type primordial non-Gaussianity parameter f_NL. We run a set of cosmological N-body simulations and perform ray-tracing simulations of weak lensing to generate 100 independent convergence maps of a 25 deg² field of view for f_NL = -100, 0 and 100. We perform a Fisher analysis to study the degeneracy among other cosmological parameters such as the dark energy equation of state parameter w and the fluctuation amplitude σ_8. We use fully nonlinear covariance matrices evaluated from 1000 ray-tracing simulations. For upcoming wide-field observations such as those from the Subaru Hyper Suprime-Cam survey with a proposed survey area of 1500 deg², the primordial non-Gaussianity can be constrained with a level of f_NL ~ 80 and w ~ 0.036 by weak-lensing MFs. If simply scaled by the effective survey area, a 20,000 deg² lensing survey using the Large Synoptic Survey Telescope will yield constraints of f_NL ~ 25 and w ~ 0.013. We show that these constraints can be further improved by a tomographic method using source galaxies in multiple redshift bins.
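
    The Fisher analysis mentioned here follows the usual recipe: F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j), with marginalized 1-sigma errors read off the inverse. A generic sketch; the derivatives and covariance are random stand-ins for the simulated MF data vectors, not the paper's numbers.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, n_par = 30, 3                     # data vector length; (f_NL, w, sigma_8)

    # Hypothetical derivatives of the MF data vector w.r.t. each parameter,
    # and a covariance estimated from many ray-tracing realizations.
    dmu = rng.normal(size=(n_par, n_bins))
    A = rng.normal(size=(n_bins, n_bins))
    cov = A @ A.T + n_bins * np.eye(n_bins)   # well-conditioned stand-in
    cov_inv = np.linalg.inv(cov)

    fisher = dmu @ cov_inv @ dmu.T            # F_ij
    errors = np.sqrt(np.diag(np.linalg.inv(fisher)))   # marginalized 1-sigma
    print(errors)
    # Scaling a survey from 1500 to 20,000 deg^2 shrinks errors by sqrt(area ratio):
    print(errors / np.sqrt(20000 / 1500))
    ```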

  20. Array design considerations for exploitation of stable weakly dispersive modal pulses in the deep ocean

    NASA Astrophysics Data System (ADS)

    Udovydchenkov, Ilya A.

    2017-07-01

    Modal pulses are broadband contributions to an acoustic wave field with fixed mode number. Stable weakly dispersive modal pulses (SWDMPs) are special modal pulses that are characterized by weak dispersion and weak scattering-induced broadening and are thus suitable for communications applications. This paper investigates, using numerical simulations, receiver array requirements for recovering information carried by SWDMPs under various signal-to-noise ratio conditions without performing channel equalization. Two groups of weakly dispersive modal pulses are common in typical mid-latitude deep ocean environments: the lowest order modes (typically modes 1-3 at 75 Hz), and intermediate order modes whose waveguide invariant is near-zero (often around mode 20 at 75 Hz). Information loss is quantified by the bit error rate (BER) of a recovered binary phase-coded signal. With fixed receiver depths, low BERs (less than 1%) are achieved at ranges up to 400 km with three hydrophones for mode 1 with 90% probability and with 34 hydrophones for mode 20 with 80% probability. With optimal receiver depths, depending on propagation range, only a few, sometimes only two, hydrophones are often sufficient for low BERs, even with intermediate mode numbers. Full modal resolution is unnecessary to achieve low BERs. Thus, a flexible receiver array of autonomous vehicles can outperform a cabled array.
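
    The BER figure of merit used here is simple to compute once symbols are decided. A minimal sketch for a binary phase-coded signal in additive noise, with hard decisions and no equalization (as in the paper's setting); the channel model and SNR are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_bits = 10_000
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                    # BPSK: 0 -> -1, 1 -> +1

    snr_db = 6.0                              # hypothetical post-array SNR
    noise_sigma = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))
    received = symbols + noise_sigma * rng.normal(size=n_bits)

    decoded = (received > 0).astype(int)      # hard decision, no equalizer
    ber = np.mean(decoded != bits)
    print(f"BER = {ber:.4f}")                 # low BER ~ information recovered
    ```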

  1. Sodium in weak G-band giants

    NASA Technical Reports Server (NTRS)

    Drake, Jeremy J.; Lambert, David L.

    1994-01-01

    Sodium abundances have been determined for eight weak G-band giants whose atmospheres are greatly enriched with products of the CN-cycling H-burning reactions. Systematic errors are minimized by comparing the weak G-band giants to a sample of similar but normal giants. If, further, Ca is selected as a reference element, model atmosphere-related errors should largely be removed. For the weak-G-band stars (Na/Ca) = 0.16 +/- 0.01, which is just possibly greater than the result (Na/Ca) = 0.10 +/- 0.03 from the normal giants. This result demonstrates that the atmospheres of the weak G-band giants are not seriously contaminated with products of ON cycling.

  2. Testing Small Variance Priors Using Prior-Posterior Predictive p Values.

    PubMed

    Hoijtink, Herbert; van de Schoot, Rens

    2017-04-03

    Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, neither is suited for the evaluation of models based on small variance priors. In this article, a well-behaved alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. [Validation of patients' knowledge after informed consent prior to coronary angiography].

    PubMed

    Eran, A; Erdmann, E; Yüksel, D; Dahlem, K M; Er, F

    2011-11-01

    The informed consent of the patient is required before any medical intervention can be done. The impact of the provided information on the subsequent knowledge of the patient is regularly questioned. In the present investigation we aimed to determine the knowledge of patients about invasive coronary angiography (CA) after they had received optimal, standard, or no prior information. 300 consecutive patients who were admitted for planned CA were included. Of these, 150 in-patients were informed by especially trained physicians one day before CA and 50 out-patients were informed by their general practitioner or cardiologist several days before admission. 100 in-patients were included before they were informed. In a standardized interview the predefined knowledge of the patients was assessed by an independent physician before CA in previously informed patients and after hospital admission in non-informed patients. The differences in knowledge between informed in- and out-patients were small. In particular, their knowledge about potential complications did not differ. Generally, patients could remember less serious complications better than life-threatening ones. Two previously informed patients (1 %) stated that they had not been informed. The knowledge of non-informed patients was much lower than the knowledge of patients who had been informed. The knowledge and recall of patients after receiving detailed information about medical interventions is limited. Optimization of the informative interview did not substantially improve this knowledge. Compared with no information at all, however, the provided information did increase knowledge. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Interferometric weak measurement of photon polarization

    NASA Astrophysics Data System (ADS)

    Iinuma, Masataka; Suzuki, Yutaro; Taguchi, Gen; Kadoya, Yutaka; Hofmann, Holger F.

    2011-10-01

    We realize a minimum back-action quantum non-demolition measurement of variable strength on photon polarization in the diagonal (PM) basis by two-mode path interference. This method uses the phase difference between the positive (P) and negative (M) superpositions in the interference between the horizontal (H) and vertical (V) polarized paths in the input beam. Although the interference cannot occur when the H and V polarizations are distinguishable, a well-controlled amount of interference is induced by erasing the H and V information using a coherent rotation of polarization toward a common diagonal polarization. This method is particularly suitable for the realization of weak measurements, where the control of the back-action is essential.
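
    Weak measurements of this kind center on the weak value A_w = ⟨phi|A|psi⟩/⟨phi|psi⟩ for pre-selected |psi⟩ and post-selected |phi⟩. A minimal Jones-vector sketch of that quantity for a PM-basis polarization observable; the particular states are chosen arbitrarily for illustration.

    ```python
    import numpy as np

    H = np.array([1, 0], dtype=complex)
    V = np.array([0, 1], dtype=complex)
    P = (H + V) / np.sqrt(2)                  # diagonal 'plus'
    M = (H - V) / np.sqrt(2)                  # diagonal 'minus'

    A = np.outer(P, P.conj()) - np.outer(M, M.conj())   # PM-basis observable

    def weak_value(pre, post, A):
        """A_w = <post|A|pre> / <post|pre>; can be anomalously large (or
        complex) when pre- and post-selection are nearly orthogonal."""
        return (post.conj() @ A @ pre) / (post.conj() @ pre)

    pre = np.cos(0.4) * H + np.sin(0.4) * V                  # pre-selected state
    post = np.cos(0.4 + 1.55) * H + np.sin(0.4 + 1.55) * V   # nearly orthogonal
    print(weak_value(pre, post, A))           # anomalously large weak value
    ```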

  5. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    NASA Astrophysics Data System (ADS)

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-01

    Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w . When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm,w ,σ8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1 σ confidence interval in (Ωm,w ) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm,w ) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
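
    Adding CMB priors, as done here with Planck, amounts in Fisher language to summing information matrices; the shrinkage of the (Omega_m, w) confidence area is then a ratio of ellipse areas. A schematic with invented matrices, not the paper's forecasts:

    ```python
    import numpy as np

    # Hypothetical Fisher matrices for (Omega_m, w, sigma_8).
    F_lensing = np.array([[4000.0,  600.0, 1500.0],
                          [ 600.0,  300.0,  400.0],
                          [1500.0,  400.0, 2500.0]])
    F_planck = np.diag([9000.0, 50.0, 6000.0])   # strong on Omega_m, sigma_8

    def om_w_area(F):
        """Area of the marginalized (Omega_m, w) 1-sigma ellipse ~ sqrt(det C)."""
        C = np.linalg.inv(F)[:2, :2]             # marginalize over sigma_8
        return np.pi * np.sqrt(np.linalg.det(C))

    improvement = om_w_area(F_lensing) / om_w_area(F_lensing + F_planck)
    print(f"confidence-area improvement factor: {improvement:.1f}")
    ```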

  6. Information jet: Handling noisy big data from weakly disconnected network

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    Sudden aggregation (an information jet) of large amounts of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security threat attacks, online sales channels, etc. Clustering of information jets based on time series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that context-based pre-clustering of a soft network (a network of information) is critical to minimizing the time needed to compute results from noisy big data. We show the difference between stochastic gradient boosting and time series-graph clustering. For disconnected higher dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  7. Quantum to Classical Transitions via Weak Measurements and Post-Selection

    NASA Astrophysics Data System (ADS)

    Cohen, Eliahu; Aharonov, Yakir

    Alongside its immense empirical success, the quantum mechanical account of physical systems imposes a myriad of divergences from our thoroughly ingrained classical ways of thinking. These divergences, while striking, would have been acceptable if only a continuous transition to the classical domain were at hand. Strangely, this is not quite the case. The difficulties involved in reconciling the quantum with the classical have given rise to different interpretations, each with its own shortcomings. Traditionally, the two domains are sewed together by invoking an ad hoc theory of measurement, which has been incorporated in the axiomatic foundations of quantum theory. This work will incorporate a few related tools for addressing the above conceptual difficulties: deterministic operators, weak measurements, and post-selection. Weak Measurement, based on a very weak von Neumann coupling, is a unique kind of quantum measurement with numerous theoretical and practical applications. In contrast to other measurement techniques, it allows one to gather a small amount of information regarding the quantum system, with only a negligible probability of collapsing it onto an eigenstate of the measured observable. A single weak measurement yields an almost random outcome, but when performed repeatedly over a large ensemble, the averaged outcome becomes increasingly robust and accurate. Importantly, a long sequence of weak measurements can be thought of as a single projective measurement. We claim in this work that classical variables appearing in the macro-world, such as center of mass, moment of inertia, pressure, and average forces, result from a multitude of quantum weak measurements performed in the micro-world. Here again, the quantum outcomes are highly uncertain, but the law of large numbers obliges their convergence to the definite quantities we know from our everyday lives. By augmenting this description with a final boundary condition and employing the notion of "classical

  8. First result from Q weak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, David S.; Battaglieri, M.; D'Angelo, A.

    2014-01-01

    Initial results are presented from the recently-completed Q_weak experiment at Jefferson Lab. The goal is a precise measurement of the proton's weak charge Q_W^p, to yield a test of the standard model and to search for evidence of new physics. The weak charge is extracted from the parity-violating asymmetry in elastic polarized e-p scattering at low momentum transfer, Q² = 0.025 GeV². A 180 μA longitudinally-polarized 1.16 GeV electron beam was scattered from a 35 cm long liquid hydrogen target at small angles, 6° < θ < 12°. Scattered electrons were analyzed in a toroidal magnetic field and detected using an array of eight Cerenkov detectors arranged symmetrically about the beam axis. The initial result, from 4% of the complete data set, is Q_W^p = 0.064 ± 0.012, in excellent agreement with the standard model expectation. Full analysis of the data is expected to yield a value for the weak charge to about 5% precision.
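
    The measured quantity is a helicity-correlated counting asymmetry, A = (Y+ - Y-)/(Y+ + Y-), corrected for beam polarization. A toy computation below; all numbers (event counts, the ppb-scale asymmetry, the polarization) are invented, and the real analysis involves many further corrections.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical detector yields for + and - helicity windows.
    N = 5 * 10**14                      # events per helicity state (toy scale)
    A_true = -2.8e-7                    # tiny parity-violating asymmetry
    y_plus = rng.poisson(N * (1 + A_true))
    y_minus = rng.poisson(N * (1 - A_true))

    A_raw = (y_plus - y_minus) / (y_plus + y_minus)
    P_beam = 0.88                       # beam polarization correction
    A_phys = A_raw / P_beam
    dA = 1 / np.sqrt(y_plus + y_minus)  # counting-statistics uncertainty
    print(f"A = {A_phys:.2e} +/- {dA / P_beam:.2e}")
    ```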

  9. Geometric phase topology in weak measurement

    NASA Astrophysics Data System (ADS)

    Samlan, C. T.; Viswanathan, Nirmal K.

    2017-12-01

    The geometric phase visualization proposed by Bhandari (R Bhandari 1997 Phys. Rep. 281 1-64) in the ellipticity-ellipse orientation basis of the polarization ellipse of light is implemented to understand the geometric aspects of weak measurement. The weak interaction of a pre-selected state, achieved via the spin-Hall effect of light (SHEL), results in a spread in the polarization ellipticity (η) or ellipse orientation (χ) depending on the resulting spatial or angular shift, respectively. The post-selection projects the η spread onto the complementary χ basis, resulting in the appearance of a geometric phase with helical phase topology in the η - χ parameter space. By representing the weak measurement on the Poincaré sphere and using Jones calculus, the complex weak value and the geometric phase topology are obtained. This deeper understanding of the weak measurement process enabled us to explore the technique's capabilities maximally, as demonstrated via SHEL in two examples—external reflection at a glass-air interface and transmission through a tilted half-wave plate.

  10. Integrative Bayesian variable selection with gene-based informative priors for genome-wide association studies.

    PubMed

    Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei

    2014-12-10

    Genome-wide Association Studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability for complex diseases. To solve this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by considering the gene interrelationship as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method was advantageous in its performance, with the highest AUC in both variable selection and outcome prediction, when compared to Stepwise- and LASSO-based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs. The Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had comparable performance with that of LASSO, but better than Stepwise strategies. The proposed iBVS strategy is a novel and valid method for Genome-wide Association Studies, with the additional advantage in that it produces more interpretable posterior probabilities for each variable unlike LASSO and other penalized regression methods.

  11. Abdominal multi-organ segmentation from CT images using conditional shape–location and unsupervised intensity priors

    PubMed Central

    Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki

    2015-01-01

    This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape–location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape–location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. PMID:26277022
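
    The evaluation metric quoted here (the Dice coefficient) is worth pinning down. A minimal sketch on toy binary masks; the mask shapes and the 2-voxel offset are arbitrary illustrations.

    ```python
    import numpy as np

    def dice(seg, ref):
        """Dice = 2|A∩B| / (|A| + |B|) for boolean masks."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

    # Toy 3D masks standing in for a segmented organ and its reference label.
    ref = np.zeros((40, 40, 40), dtype=bool)
    ref[10:30, 10:30, 10:30] = True
    seg = np.roll(ref, 2, axis=0)             # segmentation offset by 2 voxels
    print(f"Dice = {dice(seg, ref):.3f}")     # cf. >92% reported for the liver
    ```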

  12. Abdominal multi-organ segmentation from CT images using conditional shape-location and unsupervised intensity priors.

    PubMed

    Okada, Toshiyuki; Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki; Sato, Yoshinobu

    2015-12-01

    This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. Copyright © 2015

  13. A shape prior-based MRF model for 3D masseter muscle segmentation

    NASA Astrophysics Data System (ADS)

    Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe

    2012-02-01

    Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors and adjacent anatomical structures having an intensity profile similar to the target structure. In this paper we propose a novel approach to segment the masseter muscle using the graph-cut incorporating additional 3D shape priors in CT datasets, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results to the challenging problem of muscle segmentation, and further provide a quantitative and qualitative comparison with other methods. We statistically show that adding an additional shape prior into both unary and pairwise potentials can increase the robustness of the proposed method in noisy datasets.

  14. What are they up to? The role of sensory evidence and prior knowledge in action understanding.

    PubMed

    Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé

    2011-02-18

    Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations--acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that "intention" is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation.

  15. What Are They Up To? The Role of Sensory Evidence and Prior Knowledge in Action Understanding

    PubMed Central

    Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé

    2011-01-01

    Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations – acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that “intention” is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation. PMID

  16. Weak Turbulence in Protoplanetary Disks as Revealed by ALMA

    NASA Astrophysics Data System (ADS)

    Flaherty, Kevin; Hughes, A. Meredith; Simon, Jacob; Andrews, Sean; Bai, Xue-Ning; Wilner, David

    2018-01-01

    Gas kinematics are an important part of planet formation, influencing processes ranging from the growth of sub-micron grains to the migration of gas giant planets. Dynamical behavior can be traced with both synoptic observations of the mid-infrared excess, sensitive to the inner disk, and spatially resolved radio observations of gas emission, sensitive to the outer disk. I report on our ongoing efforts to constrain turbulence using ALMA observations of CO emission from protoplanetary disks. Building on our upper limit around HD 163296 (<0.05 c_s), we find evidence for weak turbulence around TW Hya (<0.08 c_s) indicating that weak non-thermal motion is not unique to HD 163296. I will also discuss observations of CO/13CO/C18O from around V4046 Sgr, DM Tau, and MWC 480 that will help to further expand the turbulence sample, as well as inform our understanding of CO photo-chemistry in the outer edges of these disks.

  17. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data studied in this research paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network.
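
    The HHI itself is a one-liner: the sum of squared shares, here taken over a user's VPN session activity. A sketch with hypothetical session counts (the endpoint breakdown is invented for illustration):

    ```python
    import numpy as np

    def hhi(counts):
        """Herfindahl-Hirschman Index: sum of squared shares, in [1/n, 1]."""
        shares = np.asarray(counts, dtype=float)
        shares = shares / shares.sum()
        return float((shares ** 2).sum())

    # Hypothetical monthly VPN session counts per remote endpoint for one user.
    typical_month = [12, 10, 9, 11, 8]        # activity spread evenly -> low HHI
    anomalous_month = [45, 1, 1, 2, 1]        # concentrated activity -> high HHI
    print(hhi(typical_month), hhi(anomalous_month))
    ```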

  18. The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.

    PubMed

    Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng

    2014-07-01

    Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
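
    The pathology of the i.i.d. prior (prior variance of the mean rate vanishing as 1/L) and the Dirichlet-style fix can be checked by simulation. In the sketch below the Gamma shape/scale values are arbitrary; the Dirichlet construction draws one mean rate and partitions it across loci, so the mean-rate variance no longer shrinks with L.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_rep = 20_000

    for L in (2, 10, 100):
        # i.i.d. prior: each locus rate ~ Gamma; mean-rate variance shrinks ~ 1/L.
        iid_mean = rng.gamma(2.0, 0.5, size=(n_rep, L)).mean(axis=1)
        # Dirichlet-style prior: draw the mean rate once, split it across loci.
        mu = rng.gamma(2.0, 0.5, size=n_rep)
        rates = mu[:, None] * L * rng.dirichlet(np.ones(L), n_rep)
        dir_mean = rates.mean(axis=1)         # equals mu exactly, by construction
        print(f"L={L:3d}  var(mean rate): iid={iid_mean.var():.4f}  "
              f"dirichlet={dir_mean.var():.4f}")
    ```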

  19. X-ray Weak Broad-line Quasars: Absorption or Intrinsic X-ray Weakness

    NASA Technical Reports Server (NTRS)

    Mushotzky, Richard (Technical Monitor); Risaliti, Guida

    2005-01-01

    XMM observations of X-ray weak quasars were performed during 2003 and 2004. The data for all the observations became available in 2004 (there was a delay of several months on the initial schedule, due to high background flares which contaminated the observations; as a consequence, most of them had to be rescheduled). We have reduced and analyzed all the data, and obtained interesting scientific results. Out of the eight sources, four are confirmed to be extremely X-ray weak, in agreement with the results of previous Chandra observations. Three sources are confirmed to be highly variable both in flux (by factors of 20-50) and in spectral properties (dramatic changes in spectral index). For both these groups of objects we are completing a publication: (1) for the X-ray weak sources, a paper has been submitted with a complete analysis of the X-ray spectra both from Chandra and XMM-Newton, and a comparison with optical and near-IR photometry obtained from all-sky surveys; possible models for the unusual spectral energy distribution of these sources are also presented. (2) For the variable sources, a paper is being finalized where the X-ray spectra obtained with XMM-Newton are compared with previous X-ray observations and with observations at other wavelengths. It is shown that these sources are high-luminosity, extreme cases of the highly variable class of narrow-line Seyfert 1s. In order to further understand the nature of these X-ray weak quasars, we submitted proposals for spectroscopy at optical and infrared telescopes. We obtained time at the TNG 4 meter telescope for near-IR observations and at the Hobby-Eberly Telescope for optical high-resolution spectroscopy. These observations were performed in early 2004. They will complement the XMM data and will lead to an understanding of whether the X-ray weakness of these sources is an intrinsic property or is due to absorption by circum-nuclear material. The infrared spectra of the variable sources have already been

  20. Cross-cultural development and psychometric evaluation of a measure to assess fear of childbirth prior to pregnancy.

    PubMed

    Stoll, Kathrin; Hauck, Yvonne; Downe, Soo; Edmonds, Joyce; Gross, Mechthild M; Malott, Anne; McNiven, Patricia; Swift, Emma; Thomson, Gillian; Hall, Wendy A

    2016-06-01

    Assessment of childbirth fear, in advance of pregnancy, and early identification of modifiable factors contributing to fear can inform public health initiatives and/or school-based educational programming for the next generation of maternity care consumers. We developed and evaluated a short fear of birth scale that incorporates the most common dimensions of fear reported by men and women prior to pregnancy: fear of labour pain, of being out of control and unable to cope with labour and birth, of complications, and of irreversible physical damage. University students in six countries (Australia, Canada, England, Germany, Iceland, and the United States, n = 2240) participated in an online survey to assess their fears and attitudes about birth. We report internal consistency reliability, corrected item-total correlations, factor loadings, and convergent and discriminant validity of the new scale. The Childbirth Fear - Prior to Pregnancy (CFPP) scale showed high internal consistency across samples (α > 0.86). All corrected item-total correlations exceeded 0.45, supporting the uni-dimensionality of the scale. Construct validity of the CFPP was supported by a high correlation between the new scale and a two-item visual analogue scale that measures fear of birth (r > 0.6 across samples). Weak correlations of the CFPP with scores on measures of related psychological states (anxiety, depression and stress) support the discriminant validity of the scale. The CFPP is a short, reliable and valid measure of childbirth fear among young women and men in six countries who plan to have children. Copyright © 2016 Elsevier B.V. All rights reserved.
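
    The two reliability statistics reported for the CFPP are standard and easy to reproduce. A minimal sketch with hypothetical item ratings (the data below are invented, not the study's):

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, n_items) array of scale scores
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      def corrected_item_total(items):
          # correlation of each item with the sum of the *other* items
          items = np.asarray(items, dtype=float)
          rest = items.sum(axis=1, keepdims=True) - items
          return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                           for j in range(items.shape[1])])

      # hypothetical 1-10 ratings on four fear items for six respondents
      x = np.array([[8, 7, 9, 8], [3, 2, 4, 3], [6, 6, 5, 7],
                    [9, 8, 9, 9], [2, 3, 2, 2], [5, 4, 6, 5]])
      print(cronbach_alpha(x), corrected_item_total(x))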

  1. Assessment of Integrated Learning: Suggested Application of Concept Mapping to Prior Learning Assessment Practices

    ERIC Educational Resources Information Center

    Popova-Gonci, Viktoria; Lamb, Monica C.

    2012-01-01

    Prior learning assessment (PLA) students enter academia with different types of concepts--some of them have been formally accepted and labeled by academia and others are informally formulated by students via independent and/or experiential learning. The critical goal of PLA practices is to assess an intricate combination of prior learning…

  2. Design of a model observer to evaluate calcification detectability in breast tomosynthesis and application to smoothing prior optimization.

    PubMed

    Michielsen, Koen; Nuyts, Johan; Cockmartin, Lesley; Marshall, Nicholas; Bosmans, Hilde

    2016-12-01

    In this work, the authors design and validate a model observer that can detect groups of microcalcifications in a four-alternative forced choice experiment and use it to optimize a smoothing prior for detectability of microcalcifications. A channelized Hotelling observer (CHO) with eight Laguerre-Gauss channels was designed to detect groups of five microcalcifications in a background of acrylic spheres by adding the CHO log-likelihood ratios calculated at the expected locations of the five calcifications. This model observer is then applied to optimize the detectability of the microcalcifications as a function of the smoothing prior. The authors examine the quadratic and total variation (TV) priors, and a combination of both. A selection of these reconstructions was then evaluated by human observers to validate the correct working of the model observer. The authors found a clear maximum for the detectability of microcalcifications when using the total variation prior with weight β_TV = 35. Detectability varied only over a small range for the quadratic and combined quadratic-TV priors when the weight β_Q of the quadratic prior was changed by two orders of magnitude. Spearman correlation with human observers was good except for the highest value of β for the quadratic and TV priors. Excluding those, the authors found ρ = 0.93 when comparing detection fractions, and ρ = 0.86 for the fitted detection threshold diameter. The authors successfully designed a model observer that was able to predict human performance over a large range of settings of the smoothing prior, except for the highest values of β, which were outside the useful range for good image quality. Since detectability depends only weakly on the strength of the combined prior, it is not possible to pick an optimal smoothness based only on this criterion. On the other hand, such a choice can now be made based on other criteria without worrying about calcification detectability.
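
    For readers unfamiliar with the observer model: a channelized Hotelling observer reduces each image to a handful of channel responses and applies a Hotelling (linear discriminant) template in that reduced space. A minimal single-location sketch with assumed channel width and toy images; the study's observer additionally sums the statistic over the five expected calcification locations:

      import numpy as np
      from numpy.polynomial.laguerre import Laguerre

      def lg_channels(n_channels, width, size):
          # rotationally symmetric Laguerre-Gauss channel templates on a
          # size x size pixel grid; 'width' is the Gaussian spread parameter
          y, x = np.indices((size, size)) - (size - 1) / 2.0
          r2 = 2.0 * np.pi * (x**2 + y**2) / width**2
          chans = [np.exp(-r2 / 2.0) * Laguerre.basis(j)(r2)
                   for j in range(n_channels)]
          return np.stack([c.ravel() / np.linalg.norm(c) for c in chans], axis=1)

      def cho_statistic(signal_imgs, noise_imgs, U, test_img):
          # Hotelling test statistic in channel space (higher = "signal present")
          vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
          vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
          S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))      # pooled channel covariance
          w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))
          return float(w @ (U.T @ test_img.ravel()))

      # toy demo: a Gaussian blob standing in for one calcification
      rng = np.random.default_rng(0)
      size = 32
      U = lg_channels(8, width=14.0, size=size)
      y, x = np.indices((size, size)) - (size - 1) / 2.0
      blob = 0.4 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))
      noise = rng.normal(0.0, 1.0, (200, size, size))
      print(cho_statistic(noise + blob, noise, U,
                          blob + rng.normal(0, 1, (size, size))))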

  3. Rational hypocrisy: a Bayesian analysis based on informal argumentation and slippery slopes.

    PubMed

    Rai, Tage S; Holyoak, Keith J

    2014-01-01

    Moral hypocrisy is typically viewed as an ethical accusation: Someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. We suggest that in some instances the apparent logical inconsistency stems from different evaluations of a weak argument, rather than dishonesty per se. Extending Corner, Hahn, and Oaksford's (2006) analysis of slippery slope arguments, we develop a Bayesian framework in which accusations of hypocrisy depend on inferences of shared category membership between proposed actions and previous standards, based on prior probabilities that inform the strength of competing hypotheses. Across three experiments, we demonstrate that inferences of hypocrisy increase as perceptions of the likelihood of shared category membership between precedent cases and current cases increase, that these inferences follow established principles of category induction, and that the presence of self-serving motives increases inferences of hypocrisy independent of changes in the actions themselves. Taken together, these results demonstrate that Bayesian analyses of weak arguments may have implications for assessing moral reasoning. © 2014 Cognitive Science Society, Inc.

  4. Hysteresis as an Implicit Prior in Tactile Spatial Decision Making

    PubMed Central

    Thiel, Sabrina D.; Bitzer, Sebastian; Nierhaus, Till; Kalberlah, Christian; Preusser, Sven; Neumann, Jane; Nikulin, Vadim V.; van der Meer, Elke; Villringer, Arno; Pleger, Burkhard

    2014-01-01

    Perceptual decisions not only depend on the incoming information from sensory systems but constitute a combination of current sensory evidence and internally accumulated information from past encounters. Although recent evidence emphasizes the fundamental role of prior knowledge for perceptual decision making, only few studies have quantified the relevance of such priors on perceptual decisions and examined their interplay with other decision-relevant factors, such as the stimulus properties. In the present study we asked whether hysteresis, describing the stability of a percept despite a change in stimulus property and known to occur at perceptual thresholds, also acts as a form of an implicit prior in tactile spatial decision making, supporting the stability of a decision across successively presented random stimuli (i.e., decision hysteresis). We applied a variant of the classical 2-point discrimination task and found that hysteresis influenced perceptual decision making: Participants were more likely to decide ‘same’ rather than ‘different’ on successively presented pin distances. In a direct comparison between the influence of applied pin distances (explicit stimulus property) and hysteresis, we found that on average, stimulus property explained significantly more variance of participants’ decisions than hysteresis. However, when focusing on pin distances at threshold, we found a trend for hysteresis to explain more variance. Furthermore, the less variance was explained by the pin distance on a given decision, the more variance was explained by hysteresis, and vice versa. Our findings suggest that hysteresis acts as an implicit prior in tactile spatial decision making that becomes increasingly important when explicit stimulus properties provide decreasing evidence. PMID:24587045

  5. Mobile devices and weak ties: a study of vision impairments and workplace access in Bangalore.

    PubMed

    Pal, Joyojeet; Lakshmanan, Meera

    2015-07-01

    To explore ways in which social and economic interactions are changed by access to mobile telephony. This is a mixed-methods study of mobile phone use among 52 urban professionals with vision impairments in Bangalore, India. Interviews and survey results indicated that mobile devices, specifically those with adaptive technology software, play a vital role as multi-purpose devices that enable people with disabilities to navigate economically and socially in an environment where accessibility remains a significant challenge. We found that mobile devices play a central role in enabling and sustaining weak ties, but also that these weak ties have important gender-specific implications. We found that women have less access to weak ties than men, which impacts women's access to assistive technology (AT). This has potential implications for women's sense of safety and independence, both of which are strongly related to AT access. Implications for Rehabilitation: Adaptive technologies increase individuals' ability to keep in contact with casual connections or weak ties through phone calls or social media. Men tend to have stronger access to weak ties than women in India due to cultural impediments to independent access to public spaces. Weak ties are an important source of assistive technology (AT) due to the high rate of resale of used AT, typically through informal networks.

  6. Preparing learners with partly incorrect intuitive prior knowledge for learning

    PubMed Central

    Ohst, Andrea; Fondu, Béatrice M. E.; Glogger, Inga; Nückles, Matthias; Renkl, Alexander

    2014-01-01

    Learners sometimes have incoherent and fragmented intuitive prior knowledge that is (partly) “incompatible” with the to-be-learned contents. Such knowledge in pieces can cause conceptual disorientation and cognitive overload while learning. We hypothesized that a pre-training intervention providing a generalized schema as a structuring framework for such knowledge in pieces would support (re)organizing-processes of prior knowledge and thus reduce unnecessary cognitive load during subsequent learning. Fifty-six student teachers participated in the experiment. A framework group underwent a pre-training intervention providing a generalized, categorical schema for categorizing primary learning strategies and related but different strategies as a cognitive framework for (re-)organizing their prior knowledge. Our control group received comparable factual information but no framework. Afterwards, all participants learned about primary learning strategies. The framework group claimed to possess higher levels of interest and self-efficacy, achieved higher learning outcomes, and learned more efficiently. Hence, providing a categorical framework can help overcome the barrier of incorrect prior knowledge in pieces. PMID:25071638

  7. Weak signal amplification and detection by higher-order sensory neurons.

    PubMed

    Jung, Sarah N; Longtin, Andre; Maler, Leonard

    2016-04-01

    Sensory systems must extract behaviorally relevant information and therefore often exhibit a very high sensitivity. How the nervous system reaches such high sensitivity levels is an outstanding question in neuroscience. Weakly electric fish (Apteronotus leptorhynchus/albifrons) are an excellent model system to address this question because detailed background knowledge is available regarding their behavioral performance and its underlying neuronal substrate. Apteronotus use their electrosense to detect prey objects. Therefore, they must be able to detect electrical signals as low as 1 μV while using a sensory integration time of <200 ms. How these very weak signals are extracted and amplified by the nervous system is not yet understood. We studied the responses of cells in the early sensory processing areas, namely, the electroreceptor afferents (EAs) and pyramidal cells (PCs) of the electrosensory lobe (ELL), the first-order electrosensory processing area. In agreement with previous work we found that EAs cannot encode very weak signals with a spike count code. However, PCs can encode prey mimic signals by their firing rate, revealing a huge signal amplification between EAs and PCs and also suggesting differences in their stimulus encoding properties. Using a simple leaky integrate-and-fire (LIF) model we predict that the target neurons of PCs in the midbrain torus semicircularis (TS) are able to detect very weak signals. In particular, TS neurons could do so by assuming biologically plausible convergence rates as well as very simple decoding strategies such as temporal integration, threshold crossing, and combining the inputs of PCs. Copyright © 2016 the American Physiological Society.
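
    The decoding idea in the last sentence can be illustrated directly. A minimal sketch, with assumed convergence (500 afferents), membrane time constant, and gain; only the qualitative effect (a weak common signal becomes detectable after pooling and temporal integration) is meaningful:

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 1e-4, 0.2                      # 0.1 ms steps; 200 ms window
      n_steps, n_inputs = int(T / dt), 500   # assumed PC-to-TS convergence

      def lif_spike_count(signal):
          # one leaky integrate-and-fire unit driven by the average of many
          # noisy afferents that share a weak common signal
          tau, v_th, gain = 0.01, 1.0, 20.0  # assumed membrane/gain parameters
          drive = signal + rng.normal(0.0, 1.0, (n_steps, n_inputs)).mean(axis=1)
          v, spikes = 0.0, 0
          for inp in drive:
              v += dt / tau * (-v + gain * inp)
              if v >= v_th:                  # threshold crossing = detection event
                  v, spikes = 0.0, spikes + 1
          return spikes

      print(lif_spike_count(0.0), lif_spike_count(0.07))  # baseline vs weak signal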

  8. Weak signal amplification and detection by higher-order sensory neurons

    PubMed Central

    Longtin, Andre; Maler, Leonard

    2016-01-01

    Sensory systems must extract behaviorally relevant information and therefore often exhibit a very high sensitivity. How the nervous system reaches such high sensitivity levels is an outstanding question in neuroscience. Weakly electric fish (Apteronotus leptorhynchus/albifrons) are an excellent model system to address this question because detailed background knowledge is available regarding their behavioral performance and its underlying neuronal substrate. Apteronotus use their electrosense to detect prey objects. Therefore, they must be able to detect electrical signals as low as 1 μV while using a sensory integration time of <200 ms. How these very weak signals are extracted and amplified by the nervous system is not yet understood. We studied the responses of cells in the early sensory processing areas, namely, the electroreceptor afferents (EAs) and pyramidal cells (PCs) of the electrosensory lobe (ELL), the first-order electrosensory processing area. In agreement with previous work we found that EAs cannot encode very weak signals with a spike count code. However, PCs can encode prey mimic signals by their firing rate, revealing a huge signal amplification between EAs and PCs and also suggesting differences in their stimulus encoding properties. Using a simple leaky integrate-and-fire (LIF) model we predict that the target neurons of PCs in the midbrain torus semicircularis (TS) are able to detect very weak signals. In particular, TS neurons could do so by assuming biologically plausible convergence rates as well as very simple decoding strategies such as temporal integration, threshold crossing, and combining the inputs of PCs. PMID:26843601

  9. Unexpected weak interaction

    NASA Astrophysics Data System (ADS)

    2013-08-01

    Stéphane Coen and Miro Erkintalo from the University of Auckland in New Zealand talk to Nature Photonics about their surprising findings regarding a weak long-range interaction they serendipitously stumbled upon while researching temporal cavity solitons.

  10. Effect of Masked Regions on Weak-lensing Statistics

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi

    2013-09-01

    Sky masking is unavoidable in wide-field weak-lensing observations. We study how masks affect the measurement of statistics of matter distribution probed by weak gravitational lensing. We first use 1000 cosmological ray-tracing simulations to examine in detail the impact of masked regions on the weak-lensing Minkowski Functionals (MFs). We consider actual sky masks used for a Subaru Suprime-Cam imaging survey. The masks increase the variance of the convergence field and the expected values of the MFs are biased. The bias then compromises the non-Gaussian signals induced by the gravitational growth of structure. We then explore how masks affect cosmological parameter estimation. We calculate the cumulative signal-to-noise ratio (S/N) for masked maps to study the information content of lensing MFs. We show that the degradation of S/N for masked maps is mainly determined by the effective survey area. We also perform a simple χ² analysis to show the impact of lensing MF bias due to masked regions. Finally, we compare ray-tracing simulations with data from a Subaru 2 deg² survey in order to address if the observed lensing MFs are consistent with those of the standard cosmology. The resulting χ²/n_dof = 29.6/30 for three combined MFs, obtained with the mask effects taken into account, suggests that the observational data are indeed consistent with the standard ΛCDM model. We conclude that the lensing MFs are a powerful probe of cosmology only if mask effects are correctly taken into account.
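
    To make the quantities concrete: on a pixelized convergence map, the three 2-D Minkowski functionals of an excursion set can be estimated by simple pixel counting, and applying a mask visibly shifts them. A crude sketch on a white-noise stand-in map (not a lensing simulation):

      import numpy as np

      def minkowski(m):
          # crude estimators of the 2-D Minkowski functionals of a binary
          # excursion set m: area fraction, boundary length, Euler
          # characteristic (4-connectivity pixel counting)
          m = m.astype(bool)
          v0 = m.mean()
          v1 = (m[1:] ^ m[:-1]).sum() + (m[:, 1:] ^ m[:, :-1]).sum()
          pairs = (m[1:] & m[:-1]).sum() + (m[:, 1:] & m[:, :-1]).sum()
          quads = (m[1:, 1:] & m[1:, :-1] & m[:-1, 1:] & m[:-1, :-1]).sum()
          v2 = int(m.sum() - pairs + quads)
          return v0, int(v1), v2

      rng = np.random.default_rng(2)
      kappa = rng.normal(size=(512, 512))   # white-noise stand-in convergence map
      mask = np.ones_like(kappa, dtype=bool)
      mask[100:200, 100:300] = False        # e.g. a bright-star mask

      nu = 1.0                              # excursion threshold
      print(minkowski(kappa > nu))          # full map
      print(minkowski((kappa > nu) & mask)) # masked map: biased MFs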

  11. EFFECT OF MASKED REGIONS ON WEAK-LENSING STATISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2013-09-10

    Sky masking is unavoidable in wide-field weak-lensing observations. We study how masks affect the measurement of statistics of matter distribution probed by weak gravitational lensing. We first use 1000 cosmological ray-tracing simulations to examine in detail the impact of masked regions on the weak-lensing Minkowski Functionals (MFs). We consider actual sky masks used for a Subaru Suprime-Cam imaging survey. The masks increase the variance of the convergence field and the expected values of the MFs are biased. The bias then compromises the non-Gaussian signals induced by the gravitational growth of structure. We then explore how masks affect cosmological parameter estimation. We calculate the cumulative signal-to-noise ratio (S/N) for masked maps to study the information content of lensing MFs. We show that the degradation of S/N for masked maps is mainly determined by the effective survey area. We also perform a simple χ² analysis to show the impact of lensing MF bias due to masked regions. Finally, we compare ray-tracing simulations with data from a Subaru 2 deg² survey in order to address if the observed lensing MFs are consistent with those of the standard cosmology. The resulting χ²/n_dof = 29.6/30 for three combined MFs, obtained with the mask effects taken into account, suggests that the observational data are indeed consistent with the standard ΛCDM model. We conclude that the lensing MFs are a powerful probe of cosmology only if mask effects are correctly taken into account.

  12. Wigner's quantum phase-space current in weakly-anharmonic weakly-excited two-state systems

    NASA Astrophysics Data System (ADS)

    Kakofengitis, Dimitris; Steuernagel, Ole

    2017-09-01

    There are no phase-space trajectories for anharmonic quantum systems, but Wigner's phase-space representation of quantum mechanics features Wigner current J. This current reveals fine details of quantum dynamics, finer than is ordinarily thought accessible according to quantum folklore invoking Heisenberg's uncertainty principle. Here, we focus on the simplest, most intuitive, and analytically accessible aspects of J. We investigate features of J for bound states of time-reversible, weakly-anharmonic one-dimensional quantum-mechanical systems which are weakly excited. We establish that weakly-anharmonic potentials can be grouped into three distinct classes: hard, soft, and odd potentials. We stress connections among these classes and with the harmonic case. We show that their Wigner current fieldline patterns can be characterised by J's discrete stagnation points, how these arise, and how a quantum system's dynamics is constrained by the stagnation points' topological charge conservation. We additionally show that quantum dynamics in phase space, in the case of vanishing Planck constant ℏ or vanishing anharmonicity, does not pointwise converge to classical dynamics.

  13. Rehabilitation in practice: management of lower motor neuron weakness.

    PubMed

    Ramdharry, Gita M

    2010-05-01

    This series of articles for rehabilitation in practice aims to cover a knowledge element of the rehabilitation medicine curriculum. Nevertheless, they are intended to be of interest to a multidisciplinary audience. The competencies addressed in this article are 'The trainee consistently demonstrates a knowledge of the pathophysiology of various specific impairments including lower motor neuron weakness' and 'management approaches for specific impairments including lower motor neuron weakness'. This article explores weakness as a lower motor neuron symptom. Weakness as a primary impairment of neuromuscular diseases is addressed, with recognition of the phenomenon of disuse atrophy, and of how weakness impacts the functional abilities of people with myopathy and neuropathy. Interventions to reduce weakness or address the functional consequences of weakness are evaluated with consideration of safety and clinical application. This paper will allow readers to: (1) appraise the contribution of research in rehabilitation of lower motor neuron weakness to clinical decision making and (2) engage with the issues that arise when researching rehabilitation interventions for lower motor neuron weakness. Impairments associated with neuromuscular conditions can lead to significant functional difficulties that impact a person's daily participation. This article focuses on the primary impairment of weakness and explores the research evidence for rehabilitation interventions that directly influence weakness or address the impact of weakness on function.

  14. Weak ergodicity of population evolution processes.

    PubMed

    Inaba, H

    1989-10-01

    The weak ergodic theorems of mathematical demography state that the age distribution of a closed population is asymptotically independent of the initial distribution. In this paper, we provide a new proof of the weak ergodic theorem of the multistate population model with continuous time. The main tool to attain this purpose is a theory of multiplicative processes, which was mainly developed by Garrett Birkhoff, who showed that ergodic properties generally hold for an appropriate class of multiplicative processes. First, we construct a general theory of multiplicative processes on a Banach lattice. Next, we formulate a dynamical model of a multistate population and show that its evolution operator forms a multiplicative process on the state space of the population. Subsequently, we investigate a sufficient condition that guarantees the weak ergodicity of the multiplicative process. Finally, we prove the weak and strong ergodic theorems for the multistate population and resolve the consistency problem.
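
    Weak ergodicity is easy to see numerically in the discrete-time analogue: two different initial age distributions, projected through the same sequence of time-varying vital rates, converge to the same age structure. A minimal sketch with illustrative Leslie-matrix entries:

      import numpy as np

      rng = np.random.default_rng(3)
      n_ages = 5

      def random_leslie():
          # random time-varying Leslie matrix: fertilities in the first row,
          # survival probabilities on the subdiagonal (values illustrative)
          A = np.zeros((n_ages, n_ages))
          A[0] = rng.uniform(0.1, 1.0, n_ages)
          A[np.arange(1, n_ages), np.arange(n_ages - 1)] = \
              rng.uniform(0.5, 0.95, n_ages - 1)
          return A

      x = rng.uniform(size=n_ages)          # two different initial populations
      y = rng.uniform(size=n_ages)
      for t in range(1, 61):
          A = random_leslie()               # both see the same vital rates
          x, y = A @ x, A @ y
          x, y = x / x.sum(), y / y.sum()   # only the age *structure* matters
          if t % 20 == 0:
              print(t, np.abs(x - y).sum()) # distance between age structures shrinks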

  15. Revisiting Weak Emission-line Quasars with a Simple Approach to Deduce their Nature and the Tracers of X-ray Weakness

    NASA Astrophysics Data System (ADS)

    Ni, Qingling

    2018-01-01

    We present an X-ray and multi-wavelength study of 17 “bridge” weak emission-line quasars (WLQs) and 16 “extreme” WLQs naturally divided by their C IV rest equivalent widths (REWs), which together constitute our clean WLQ sample. New Chandra 3.1-4.8 ks observations were obtained for 14 objects while the other 19 have archival X-ray observations. 4 of the 17 bridge WLQs appear to be X-ray weak, while 9 of the 16 extreme WLQs appear to be X-ray weak. The X-ray weak fraction in the bridge sample (23.5%) is lower than in the extreme sample (56.3%), indicating that the fraction of X-ray weak objects rises along with C IV REW. X-ray stacking analysis is performed for the X-ray weak WLQs in the clean sample. We measured a relatively hard (Γ_eff = 1.37) effective power-law photon index for a stack of the X-ray weak subsample, suggesting X-ray absorption due to shielding material inside the broad emission-line region (BELR). We propose a geometrically and optically thick inner accretion disk as the natural shield, which could also explain the behavior of the X-ray weak fraction with C IV REW. Furthermore, we ran Peto-Prentice tests to assess whether the distributions of optical-UV spectral properties differ between X-ray weak WLQs and X-ray normal WLQs. We also examined correlations between Δα_OX and optical-UV spectral properties. The C IV REW, C IV blueshift, C IV FWHM, REWs of the Si IV, λ1900, Fe II, and Mg II emission features, and the relative SDSS color Δ(g - i) are examined in our study. Δ(g - i) turned out to be the most effective tracer of X-ray weakness.

  16. Identification of interfaces involved in weak interactions with application to F-actin-aldolase rafts.

    PubMed

    Hu, Guiqing; Taylor, Dianne W; Liu, Jun; Taylor, Kenneth A

    2018-03-01

    Macromolecular interactions occur with widely varying affinities. Strong interactions form well defined interfaces but weak interactions are more dynamic and variable. Weak interactions can collectively lead to large structures such as microvilli via cooperativity and are often the precursors of much stronger interactions, e.g. the initial actin-myosin interaction during muscle contraction. Electron tomography combined with subvolume alignment and classification is an ideal method for the study of weak interactions because a 3-D image is obtained for the individual interactions, which subsequently are characterized collectively. Here we describe a method to characterize heterogeneous F-actin-aldolase interactions in 2-D rafts using electron tomography. By forming separate averages of the two constituents and fitting an atomic structure to each average, together with the alignment information which relates the raw motif to the average, an atomic model of each crosslink is determined and a frequency map of contact residues is computed. The approach should be applicable to any large structure composed of constituents that interact weakly and heterogeneously. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Extrapolating Weak Selection in Evolutionary Games

    PubMed Central

    Wu, Bin; García, Julián; Hauert, Christoph; Traulsen, Arne

    2013-01-01

    In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By “qualitatively valid” we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant for another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur for intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes frequently occur and their likelihood increases rapidly with the number of strategies. In particular, rank changes are almost certain for large numbers of strategies, which jeopardizes the predictive power of results derived for weak selection. PMID:24339769
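
    The kind of calculation behind such rank comparisons can be sketched for a multiplayer two-strategy game: compute fixation probabilities under a Moran process with an exponential payoff-to-fitness map and scan the selection intensity β. The payoff values and population size below are assumed for illustration; whether the ranking persists depends on the game:

      import math

      def expected_payoff(payoff, focal_A, N, i):
          # expected payoff of a focal player when i of N players use A;
          # payoff[k] is the focal payoff with k A-players among d-1 co-players
          d = len(payoff)
          a_pool = i - 1 if focal_A else i       # A co-players available
          return sum(math.comb(a_pool, k)
                     * math.comb(N - 1 - a_pool, d - 1 - k)
                     / math.comb(N - 1, d - 1) * payoff[k]
                     for k in range(d))

      def fixation_prob(a, b, N, beta):
          # fixation probability of one A mutant under a Moran process with
          # exponential payoff-to-fitness mapping f = exp(beta * payoff)
          total, log_prod = 1.0, 0.0
          for i in range(1, N):
              pa = expected_payoff(a, True, N, i)
              pb = expected_payoff(b, False, N, i)
              log_prod += beta * (pb - pa)       # log of the T-/T+ product
              total += math.exp(log_prod)
          return 1.0 / total

      # assumed 3-player, two-strategy game (k = 0, 1, 2 A co-players)
      a = [1.0, 4.0, 0.0]
      b = [2.0, 1.5, 1.0]
      N = 20
      for beta in (0.01, 0.1, 1.0, 5.0):
          rho_A = fixation_prob(a, b, N, beta)
          rho_B = fixation_prob(b[::-1], a[::-1], N, beta)  # B invading A
          print(beta, rho_A, rho_B, rho_A > rho_B)          # does the ranking persist?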

  18. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
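
    A simplified version of the idea can be sketched as follows: embed the signal in a Hankel matrix, decompose, score each singular component by a periodicity index, and reconstruct with truncated linear weights. The index below (envelope autocorrelation at the assumed fault period) is our stand-in for the paper's periodic modulation intensity, and all parameters are illustrative:

      import numpy as np
      from scipy.signal import hilbert

      def diag_average(H):
          # invert the Hankel embedding by averaging anti-diagonals
          L, K = H.shape
          out, cnt = np.zeros(L + K - 1), np.zeros(L + K - 1)
          for i in range(L):
              out[i:i + K] += H[i]
              cnt[i:i + K] += 1
          return out / cnt

      def rsvd_denoise(x, L, period, keep=0.3):
          # rank singular components by a simple periodicity index instead
          # of energy, then reconstruct with a truncated linear weighting
          K = len(x) - L + 1
          H = np.lib.stride_tricks.sliding_window_view(x, K)   # (L, K) Hankel
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          comps = [diag_average(s[i] * np.outer(U[:, i], Vt[i]))
                   for i in range(len(s))]
          env = [np.abs(hilbert(c)) for c in comps]
          idx = np.array([np.corrcoef(e[:-period], e[period:])[0, 1]
                          for e in env])
          thr = np.quantile(idx, 1 - keep)                     # truncation point
          w = np.clip((idx - thr) / (idx.max() - thr + 1e-12), 0.0, 1.0)
          return sum(wi * c for wi, c in zip(w, comps))

      # toy signal: weak periodic impulses (the "fault") buried in noise
      rng = np.random.default_rng(4)
      n, period = 2048, 128
      x = rng.normal(0.0, 1.0, n)
      x[::period] += 3.0
      y = rsvd_denoise(x, L=64, period=period)
      print(np.abs(y[::period]).mean(), np.abs(y).mean())  # impulses stand out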

  19. Attending to weak signals: the leader's challenge.

    PubMed

    Kerfoot, Karlene

    2005-12-01

    Halverson and Isham (2003) quote sources that report the accidental death rate of simply being in a hospital is " ... four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to give strong interventions to weak signals. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure using the intelligence that weak signals provide. Much of what happens is unexpected in health care. However, with a culture that is continually looking for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing the weak signals to manage the unforeseen.

  20. Attending to weak signals: the leader's challenge.

    PubMed

    Kerfoot, Karlene

    2004-01-01

    Halverson and Isham (2003) quote sources that report the accidental death rate of simply being in a hospital is "... four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to give strong interventions to weak signals. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure using the intelligence that weak signals provide. Much of what happens is unexpected in health care. However, with a culture that is continually looking for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing the weak signals to manage the unforeseen.

  1. Attending to weak signals: the leader's challenge.

    PubMed

    Kerfoot, Karlene

    2003-01-01

    Halverson and Isham (2003) quote sources that report the accidental death rate of simply being in a hospital is "...four hundred times more likely than your risk of death from traveling by train, forty times higher than driving a car, and twenty times higher than flying in a commercial aircraft" (p. 13). High-reliability organizations such as nuclear power plants and aircraft carriers have been pioneers in the business of recognizing weak signals. Weick and Sutcliffe (2001) note that high-reliability organizations distinguish themselves from others because of their mindfulness, which enables them to see the significance of weak signals and to give strong interventions to weak signals. To act mindfully, these organizations have an underlying mental model of continually updating, anticipating, and focusing on the possibility of failure using the intelligence that weak signals provide. Much of what happens is unexpected in health care. However, with a culture that is continually looking for weak signals, and intervenes and rescues when these signals are detected, the unexpected happens less often. This is the epitome of how leaders can build a culture of safety that focuses on recognizing the weak signals to manage the unforeseen.

  2. Evaluating the "recovery level" of endangered species without prior information before alien invasion.

    PubMed

    Watari, Yuya; Nishijima, Shota; Fukasawa, Marina; Yamada, Fumio; Abe, Shintaro; Miyashita, Tadashi

    2013-11-01

    For maintaining social and financial support for eradication programs of invasive species, quantitative assessment of the recovery of native species or ecosystems is important because it provides a measurable parameter of success. However, setting a concrete goal for recovery is often difficult owing to lack of information prior to the introduction of invaders. Here, we present a novel approach to evaluate the achievement level of invasive predator management based on the carrying capacity of endangered species estimated using long-term monitoring data. On Amami-Oshima Island, Japan, where an eradication project for the introduced small Indian mongoose has been ongoing since 2000, we surveyed the population densities of four endangered species threatened by the mongoose (the Amami rabbit, the Otton frog, the Amami tip-nosed frog, and Amami Ishikawa's frog) at four time points from 2003 to 2011. We estimated the carrying capacities of these species using the logistic growth model combined with the effects of mongoose predation and environmental heterogeneity. All species showed clear tendencies toward increasing density as mongoose density decreased, and they exhibited density-dependent population growth. The estimated carrying capacities of three endangered species had confidence intervals small enough to measure recovery levels under mongoose management. The population density of each endangered species has recovered to the level of the carrying capacity at about 20-40% of all sites, whereas no individuals were observed at more than 25% of all sites. We propose that the present approach, involving appropriate monitoring data on native populations, will be widely applicable to various eradication projects and will provide unambiguous goals for the management of invasive species.
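
    The estimation model described above can be sketched as a single ordinary differential equation: logistic growth toward carrying capacity K minus a mongoose predation term. All parameter values and the mongoose decline curve below are assumed for illustration; the study fits such a model to monitoring data to estimate K:

      import numpy as np
      from scipy.integrate import solve_ivp

      r, K, a = 0.4, 100.0, 0.05             # assumed growth rate, carrying
                                             # capacity, predation coefficient

      def mongoose_density(t):
          return 8.0 * np.exp(-0.3 * t)      # declining under the program

      def rhs(t, N):
          # logistic growth minus predation: dN/dt = rN(1 - N/K) - aMN
          return r * N * (1.0 - N / K) - a * mongoose_density(t) * N

      sol = solve_ivp(rhs, (0.0, 30.0), [5.0], t_eval=np.linspace(0.0, 30.0, 7))
      for t, N in zip(sol.t, sol.y[0]):
          print(f"year {t:4.1f}: density {N:6.1f}  recovery level {N / K:.2f}")

    The ratio N/K is the "recovery level" of the record: a site-comparable measure once K has been estimated.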

  3. Spatial Acuity and Prey Detection in Weakly Electric Fish

    PubMed Central

    Babineau, David; Lewis, John E; Longtin, André

    2007-01-01

    It is well-known that weakly electric fish can exhibit extreme temporal acuity at the behavioral level, discriminating time intervals in the submicrosecond range. However, relatively little is known about the spatial acuity of the electrosense. Here we use a recently developed model of the electric field generated by Apteronotus leptorhynchus to study spatial acuity and small signal extraction. We show that the quality of sensory information available on the lateral body surface is highest for objects close to the fish's midbody, suggesting that spatial acuity should be highest at this location. Overall, however, this information is relatively blurry and the electrosense exhibits relatively poor acuity. Despite this apparent limitation, weakly electric fish are able to extract the minute signals generated by small prey, even in the presence of large background signals. In fact, we show that the fish's poor spatial acuity may actually enhance prey detection under some conditions. This occurs because the electric image produced by a spatially dense background is relatively “blurred” or spatially uniform. Hence, the small spatially localized prey signal “pops out” when fish motion is simulated. This shows explicitly how the back-and-forth swimming, characteristic of these fish, can be used to generate motion cues that, as in other animals, assist in the extraction of sensory information when signal-to-noise ratios are low. Our study also reveals the importance of the structure of complex electrosensory backgrounds. Whereas large-object spacing is favorable for discriminating the individual elements of a scene, small spacing can increase the fish's ability to resolve a single target object against this background. PMID:17335346

  4. Weakly Coretractable Modules

    NASA Astrophysics Data System (ADS)

    Hadi, Inaam M. A.; Al-aeashi, Shukur N.

    2018-05-01

    Let R be a ring with identity and M a unitary right R-module. Here we introduce the class of weakly coretractable modules. Some basic properties are investigated, and some relationships between these modules and other related ones are introduced.

  5. Direct shear mapping - a new weak lensing tool

    NASA Astrophysics Data System (ADS)

    de Burgh-Day, C. O.; Taylor, E. N.; Webster, R. L.; Hopkins, A. M.

    2015-08-01

    We have developed a new technique called direct shear mapping (DSM) to measure gravitational lensing shear directly from observations of a single background source. The technique assumes the velocity map of an unlensed, stably rotating galaxy will be rotationally symmetric. Lensing distorts the velocity map, making it asymmetric. The degree of lensing can be inferred by determining the transformation required to restore axisymmetry. This technique is in contrast to traditional weak lensing methods, which require averaging an ensemble of background galaxy ellipticity measurements to obtain a single shear measurement. We have tested the efficacy of our fitting algorithm with a suite of systematic tests on simulated data. We demonstrate that we are in principle able to measure shears as small as 0.01. In practice, we have fitted for the shear in very low redshift (and hence unlensed) velocity maps, and have obtained a null result with an error of ±0.01. This high sensitivity results from analysing spatially resolved spectroscopic images (i.e. 3D data cubes), including not just shape information (as in traditional weak lensing measurements) but velocity information as well. Spirals and rotating ellipticals are ideal targets for this new technique. Data from any large Integral Field Unit (IFU) or radio telescope is suitable, or indeed any instrument with spatially resolved spectroscopy such as the Sydney-Australian-Astronomical Observatory Multi-Object Integral Field Spectrograph (SAMI), the Atacama Large Millimeter/submillimeter Array (ALMA), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and the Square Kilometer Array (SKA).
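
    The core of DSM, inferring shear from the broken symmetry of a single velocity map, can be sketched on a toy rotating disk. Everything below (rotation curve, grid, noise-free maps) is illustrative; the point is that minimizing a reflection-asymmetry statistic recovers the input cross shear:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def velocity_map(x, y):
          # toy line-of-sight velocity field of a stably rotating disk:
          # flat rotation curve, major axis along x, arbitrary velocity units
          r = np.hypot(x, y) + 1e-9
          return np.tanh(r / 5.0) * x / r

      n = 64
      grid = np.linspace(-15.0, 15.0, n)
      X, Y = np.meshgrid(grid, grid)
      gamma_true = 0.02                  # cross shear to be recovered

      def asymmetry(gamma):
          # compose the true shear with a trial inverse shear (a stand-in
          # for resampling an observed map), then measure departures from
          # the reflection symmetries of an unlensed rotating disk
          Xs, Ys = X - gamma * Y, Y - gamma * X
          V = velocity_map(Xs + gamma_true * Ys, Ys + gamma_true * Xs)
          return np.sum((V - np.flipud(V))**2) + np.sum((V + np.fliplr(V))**2)

      res = minimize_scalar(asymmetry, bounds=(-0.1, 0.1), method="bounded")
      print(res.x)                       # ~0.02: shear from a single source

    Note that a shear aligned with the disk axes preserves both reflections, so this one-parameter sketch only fits the cross component; recovering the full shear needs the disk orientation as well.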

  6. Asymmetric Weakness and West Nile Virus Infection.

    PubMed

    Kuo, Dick C; Bilal, Saadiyah; Koller, Paul

    2015-09-01

    Weakness is a common presentation in the emergency department (ED). Asymmetric weakness, or weakness that appears not to follow an anatomical pattern, is a less common occurrence. Acute flaccid paralysis with no signs of meningoencephalitis is one of the more uncommon presentations of West Nile virus (WNV). Patients may complain of an acute onset of severe weakness, or even paralysis, in one or multiple limbs with no sensory deficits. This weakness is caused by injury to the anterior horn cells of the spinal cord. We present a case of acute asymmetric flaccid paralysis with preserved sensory responses that was eventually diagnosed as neuroinvasive WNV infection. A 31-year-old male with no medical history presented with complaints of left lower and right upper extremity weakness. Computed tomography scan was negative and multiple other studies were performed in the ED. Eventually, he was admitted to the hospital and was found to have decreased motor amplitudes, severely reduced motor neuron recruitment, and denervation on electrodiagnostic study. A cerebrospinal fluid specimen tested positive for WNV immunoglobulin (Ig) G and IgM antibodies. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Acute asymmetric flaccid paralysis with no signs of viremia or meningoencephalitis is an unusual presentation of WNV infection. WNV should be included in the differential for patients with asymmetric weakness, especially in the summer months in areas with large mosquito populations. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Adjustment of prior constraints for an improved crop monitoring with the Earth Observation Land Data Assimilation System (EO-LDAS)

    NASA Astrophysics Data System (ADS)

    Truckenbrodt, Sina C.; Gómez-Dans, José; Stelmaszczuk-Górska, Martyna A.; Chernetskiy, Maxim; Schmullius, Christiane C.

    2017-04-01

    Throughout the past decades various satellite sensors have been launched that record reflectance in the optical domain and facilitate comprehensive monitoring of the vegetation-covered land surface from space. The interaction of photons with the canopy, leaves and soil that determines the spectrum of reflected sunlight can be simulated with radiative transfer models (RTMs). The inversion of RTMs permits the derivation of state variables such as leaf area index (LAI) and leaf chlorophyll content from top-of-canopy reflectance. Space-borne data are, however, insufficient for an unambiguous derivation of state variables and additional constraints are required to resolve this ill-posed problem. Data assimilation techniques permit the conflation of various information with due allowance for associated uncertainties. The Earth Observation Land Data Assimilation System (EO-LDAS) integrates RTMs into a dynamic process model that describes the temporal evolution of state variables. In addition, prior information is included to further constrain the inversion and enhance the state variable derivation. In previous studies on EO-LDAS, prior information was represented by temporally constant values for all investigated state variables, while information about their phenological evolution was neglected. Here, we examine to what extent the implementation of prior information reflecting the phenological variability improves the performance of EO-LDAS with respect to the monitoring of crops on the agricultural Gebesee test site (Central Germany). Various routines for the generation of prior information are tested. This involves the usage of data on state variables that was acquired in previous years as well as the application of phenological models. The performance of EO-LDAS with the newly implemented prior information is tested based on medium resolution satellite imagery (e.g., RapidEye REIS, Sentinel-2 MSI, Landsat-7 ETM+ and Landsat-8 OLI). The predicted state variables are

  8. Survival and weak chaos.

    PubMed

    Nee, Sean

    2018-05-01

    Survival analysis in biology and reliability theory in engineering concern the dynamical functioning of bio/electro/mechanical units. Here we incorporate effects of chaotic dynamics into the classical theory. Dynamical systems theory now distinguishes strong and weak chaos. Strong chaos generates Type II survivorship curves entirely as a result of the internal operation of the system, without any age-independent, external, random forces of mortality. Weak chaos exhibits (a) intermittency and (b) Type III survivorship, defined as a decreasing per capita mortality rate: engineering explicitly defines this pattern of decreasing hazard as 'infant mortality'. Weak chaos generates two phenomena from the normal functioning of the same system. First, infant mortality, sensu engineering, without any external explanatory factors, such as manufacturing defects, which is followed by increased average longevity of survivors. Second, sudden failure of units during their normal period of operation, before the onset of age-dependent mortality arising from senescence. The relevance of these phenomena encompasses, for example: no-fault-found failure of electronic devices; high rates of human early spontaneous miscarriage/abortion; runaway pacemakers; sudden cardiac death in young adults; bipolar disorder; and epilepsy.

  9. Survival and weak chaos

    PubMed Central

    2018-01-01

    Survival analysis in biology and reliability theory in engineering concern the dynamical functioning of bio/electro/mechanical units. Here we incorporate effects of chaotic dynamics into the classical theory. Dynamical systems theory now distinguishes strong and weak chaos. Strong chaos generates Type II survivorship curves entirely as a result of the internal operation of the system, without any age-independent, external, random forces of mortality. Weak chaos exhibits (a) intermittency and (b) Type III survivorship, defined as a decreasing per capita mortality rate: engineering explicitly defines this pattern of decreasing hazard as ‘infant mortality’. Weak chaos generates two phenomena from the normal functioning of the same system. First, infant mortality—sensu engineering—without any external explanatory factors, such as manufacturing defects, which is followed by increased average longevity of survivors. Second, sudden failure of units during their normal period of operation, before the onset of age-dependent mortality arising from senescence. The relevance of these phenomena encompasses, for example: no-fault-found failure of electronic devices; high rates of human early spontaneous miscarriage/abortion; runaway pacemakers; sudden cardiac death in young adults; bipolar disorder; and epilepsy. PMID:29892407

  10. Sex difference in attractiveness perceptions of strong and weak male walkers.

    PubMed

    Fink, Bernhard; André, Selina; Mines, Johanna S; Weege, Bettina; Shackelford, Todd K; Butovskaya, Marina L

    2016-11-01

    Men and women accurately assess male physical strength from facial and body morphology cues. Women's assessments of male facial attractiveness, masculinity, and dominance correlate positively with male physical strength. A positive relationship also has been reported between physical strength and attractiveness of men's dance movements. Here, we investigate men's and women's attractiveness, dominance, and strength assessments from brief samples of male gait. Handgrip strength (HGS) was measured in 70 heterosexual men and their gait was motion-captured. Men and women judged 20 precategorized strong (high HGS) and weak (low HGS) walkers on attractiveness, dominance, and strength, and provided a measure of their own HGS. Both men and women judged strong walkers higher on dominance and strength than weak walkers. Women but not men judged strong walkers more attractive than weak walkers. These effects were independent of observers' physical strength. Male physical strength is conveyed not only through facial and body morphology, but also through body movements. We discuss our findings with reference to studies suggesting that physical strength provides information about male quality in contexts of inter- and intrasexual selection. Am. J. Hum. Biol. 28:913-917, 2016. © 2016 Wiley Periodicals, Inc.

  11. Order priors for Bayesian network discovery with an application to malware phylogeny

    DOE PAGES

    Oyen, Diane; Anderson, Blake; Sentz, Kari; ...

    2017-09-15

    Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.
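
    The flavor of an order prior is easy to demonstrate on a toy structure-scoring problem. In the linear-Gaussian sketch below (our construction, not the paper's algorithm), the two candidate chains are Markov equivalent, so the data cannot distinguish them; a prior that penalizes edges pointing against an expert's ordering breaks the tie. The penalty weight lam is assumed:

      import numpy as np

      rng = np.random.default_rng(5)
      n = 500
      X = rng.normal(size=n)                 # ground truth: X -> Y -> Z
      Y = 0.8 * X + rng.normal(scale=0.5, size=n)
      Z = 0.8 * Y + rng.normal(scale=0.5, size=n)
      data = {'X': X, 'Y': Y, 'Z': Z}

      def gauss_loglik(child, parents):
          # maximized Gaussian log-likelihood of child regressed on parents
          y = data[child]
          A = np.column_stack([data[p] for p in parents] + [np.ones(n)])
          resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
          return -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1.0)

      order_rank = {'X': 0, 'Y': 1, 'Z': 2}  # expert's (partial) ordering
      lam = 10.0                             # assumed order-prior strength

      def log_score(dag):
          # BIC-style network score plus a partial-order prior: every edge
          # pointing against the expert ordering pays a penalty lam
          ll = sum(gauss_loglik(c, ps) for c, ps in dag.items())
          n_params = sum(len(ps) for ps in dag.values())
          violations = sum(order_rank[p] > order_rank[c]
                           for c, ps in dag.items() for p in ps)
          return ll - 0.5 * n_params * np.log(n) - lam * violations

      chain_fwd = {'X': [], 'Y': ['X'], 'Z': ['Y']}   # respects the order
      chain_rev = {'X': ['Y'], 'Y': ['Z'], 'Z': []}   # same skeleton, reversed
      print(log_score(chain_fwd), log_score(chain_rev))

    The likelihood terms are essentially equal; only the order prior separates the two structures, which is the sense in which human knowledge complements the data.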

  12. Order priors for Bayesian network discovery with an application to malware phylogeny

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyen, Diane; Anderson, Blake; Sentz, Kari

    Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.

  13. AN ORIENTATIONAL RESPONSE TO WEAK GAMMA RADIATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, F.A. Jr.

    1963-10-01

    The common planarian worm, Dugesia dorotocephala, displays a significant orientational response to an increase in Cs-137 gamma radiation when the increase is no greater than six times background. The worms are able to distinguish the direction of the weak gamma source, turning away from it, whether it is presented on the right or left side. The response sign is, therefore, the same as that of the response of these negatively phototactic worms to visible light. There is a clear compass-directional relationship of the responsiveness to the experimental gamma radiation. A conspicuous negative response is present when the worms are traveling northward or southward in the earth's field with the gamma change in an east-west axis. No statistically significant mean turning response to the gamma radiation is found when the worms are traveling eastward or westward in the earth's field with the gamma change in a north-south axis. The previously observed annual fluctuation in the character of the monthly orientational rhythm of north-directed worms has been confirmed in an additional year of study. During colder months, the rhythm is monthly; during warmer months it is semi-monthly. There is a semi-monthly fluctuation in the response of Dugesia to weak gamma radiation during mid-morning hours, the worms turning away from the source for four days prior to new and full moon, and toward it for two days following new and full moon. The stronger the field strength, up to 9 times background, the larger the amplitude of the rhythm. There is a direct relationship between intensities of gamma radiation between that of background and nine times background, and the strength of the negative response of the worms. Evidence suggests that the negative response of Dugesia to a gamma source may be modified by experimental alteration of the natural ambient electrostatic field. Some possible biological significances of this remarkable responsiveness to gamma radiation, and its particular

  14. Discovery and problem solving: Triangulation as a weak heuristic

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1987-01-01

    Recently the artificial intelligence community has turned its attention to the process of discovery and found that the history of science is a fertile source for what Darden has called compiled hindsight. Such hindsight generates weak heuristics for discovery that do not guarantee that discoveries will be made but do have proven worth in leading to discoveries. Triangulation is one such heuristic that is grounded in historical hindsight. This heuristic is explored within the general framework of the BACON, GLAUBER, STAHL, DALTON, and SUTTON programs. In triangulation different bases of information are compared in an effort to identify gaps between the bases. Thus, assuming that the bases of information are relevantly related, the gaps that are identified should be good locations for discovery and robust analysis.

  15. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-30

    Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.

  16. Neutrino mass and dark energy from weak lensing.

    PubMed

    Abazajian, Kevork N; Dodelson, Scott

    2003-07-25

    Weak gravitational lensing of background galaxies by intervening matter directly probes the mass distribution in the Universe. This distribution is sensitive to both the dark energy and neutrino mass. We examine the potential of lensing experiments to measure features of both simultaneously. Focusing on the radial information contained in a future deep 4000 deg² survey, we find that the expected (1σ) error on a neutrino mass is 0.1 eV, if the dark-energy parameters are allowed to vary. The constraints on dark-energy parameters are similarly restrictive, with errors on w of 0.09.

  17. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
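
    A minimal sketch of the relaxation-map construction described above: classify prior and current images into air/soft-tissue/bone, locate mismatched regions, and turn a distance transform of the mismatch mask into voxel-dependent weights. The HU thresholds, decay length, and weight values are assumptions for illustration, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def classify(image, air_hu=-300.0, bone_hu=300.0):
    """Label voxels as air (0), soft tissue (1) or bone (2)."""
    labels = np.ones_like(image, dtype=np.int8)
    labels[image < air_hu] = 0
    labels[image > bone_hu] = 2
    return labels

def relaxation_map(prior_img, current_img, w_matched=0.2, w_mismatched=1.0,
                   decay_vox=5.0):
    """Voxel-wise relaxation weights: matched anatomy leans on the prior,
    mismatched anatomy is updated more strongly by new projection data."""
    mismatch = classify(prior_img) != classify(current_img)
    # Distance (in voxels) from each voxel to the nearest mismatch,
    # so the weight falls off smoothly away from changed regions.
    dist = ndimage.distance_transform_edt(~mismatch)
    alpha = np.exp(-dist / decay_vox)     # 1 at mismatches, -> 0 far away
    return w_matched + (w_mismatched - w_matched) * alpha
```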

  18. Counterfactual quantum cryptography based on weak coherent states

    NASA Astrophysics Data System (ADS)

    Yin, Zhen-Qiang; Li, Hong-Wei; Yao, Yao; Zhang, Chun-Mei; Wang, Shuang; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2012-08-01

    In the “counterfactual quantum cryptography” scheme [T.-G. Noh, Phys. Rev. Lett. 103, 230501 (2009)], two legitimate distant peers may share secret-key bits even when the information carriers do not travel in the quantum channel. The security of this protocol with an ideal single-photon source has been proved by Yin et al. [Z.-Q. Yin, H. W. Li, W. Chen, Z. F. Han, and G. C. Guo, Phys. Rev. A 82, 042335 (2010)]. In this paper, we prove the security of the counterfactual-quantum-cryptography scheme based on a commonly used weak-coherent-laser source by considering a general collective attack. The basic assumption of this proof is that the efficiency and dark-count rate of a single-photon detector are consistent for any n-photon Fock states. Then, through randomizing the phases of the encoding weak coherent states, Eve's ancilla will be transformed into a classical mixture. Finally, the lower bound of the secret-key-bit rate and a performance analysis for the practical implementation are both given.

  19. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
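
    The sampling step described above can be made concrete with plain rejection sampling, the simplest (and least efficient) way to draw from the prior inside the current likelihood-restricted region; the record's two new methods are drop-in replacements for this step. The toy Gaussian likelihood and unit-square flat prior below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior():
    """Flat prior on the unit square (illustrative choice)."""
    return rng.uniform(0.0, 1.0, size=2)

def constrained_draw(log_like, log_l_min, max_tries=100_000):
    """Draw from the prior restricted to log L(theta) > log_l_min."""
    for _ in range(max_tries):
        theta = sample_prior()
        if log_like(theta) > log_l_min:
            return theta
    raise RuntimeError("likelihood-restricted region too small to hit")

# Toy bivariate Gaussian likelihood centred at (0.5, 0.5).
log_like = lambda t: -0.5 * np.sum(((t - 0.5) / 0.1) ** 2)
print(constrained_draw(log_like, log_l_min=-2.0))
```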

  20. 76 FR 66741 - Agency Information Collection Activities: Prior Disclosure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    ... forms of information technology; and (e) the annual cost burden to respondents or record keepers from... Responses: 3,500. Estimated Time per Response: 1 hour. Estimated Total Annual Burden Hours: 3,500. Dated...

  1. Imaging performance of a hybrid x-ray computed tomography-fluorescence molecular tomography system using priors.

    PubMed

    Ale, Angelique; Schulz, Ralf B; Sarantopoulos, Athanasios; Ntziachristos, Vasilis

    2010-05-01

    We study the performance of two newly introduced and previously suggested methods that incorporate priors into inversion schemes associated with data from a recently developed hybrid x-ray computed tomography and fluorescence molecular tomography system, the latter based on CCD camera photon detection. The unique data set studied comprises accurately registered, highly spatially sampled photon fields propagating through tissue along 360-degree projections. Approaches that incorporate structural prior information were included in the inverse problem by adding a penalty term to the minimization function utilized for image reconstructions. Results were compared as to their performance with simulated and experimental data from a lung inflammation animal model and against the inversions achieved when not using priors. The importance of using priors over stand-alone inversions is also showcased with high spatial sampling simulated and experimental data. The approach offering optimal performance in resolving fluorescent biodistribution in small animals is also discussed. Inclusion of prior information from x-ray CT data in the reconstruction of the fluorescence biodistribution leads to improved agreement between the reconstruction and validation images for both simulated and experimental data.

  2. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
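
    A minimal sketch of the SISR scheme discussed above, for a scalar stochastic-growth population model with Poisson-observed counts and kernel-smoothing jitter to limit particle depletion. The model, priors, and tuning constants are illustrative assumptions, not the study's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)
T, P = 25, 5000

# Simulated counts from a population growing ~5% per year (toy data).
counts = rng.poisson(100.0 * 1.05 ** np.arange(T))

r = rng.normal(1.05, 0.02, P)        # informative prior on growth rate
N = rng.uniform(50.0, 150.0, P)      # prior on initial population size

for t in range(T):
    N = r * N * np.exp(rng.normal(0.0, 0.05, P))    # propagate states
    log_w = counts[t] * np.log(N) - N               # Poisson log-likelihood
    w = np.exp(log_w - log_w.max())                 # stabilized weights
    w /= w.sum()
    idx = rng.choice(P, size=P, p=w)                # resample particles
    N, r = N[idx], r[idx]
    r += rng.normal(0.0, 0.002, P)                  # kernel-smoothing jitter

print(f"year {T}: posterior mean N = {N.mean():.1f}, r = {r.mean():.3f}")
```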

  3. Strengths and weaknesses in the consultation skills of senior medical students: identification, enhancement and curricular change.

    PubMed

    Hastings, Am; McKinley, R K; Fraser, R C

    2006-05-01

    This paper seeks to describe the consultation strengths and weaknesses of senior medical students, the explicit and prioritised strategies for improvement utilised in student feedback, and curriculum developments informed by this work. Prospective, descriptive study of students on clinical placements in general practice. All were observed directly by 2 assessors in consultation with 5 patients in a general practice setting. Performance was judged against 5 categories of consultation competence and 35 component competences as contained in a modified version of the Leicester Assessment Package. Specific strategies for improvement were selected from a list of 69 previously formulated strategies. Data from 1116 students were included. The consultation competences identified most frequently as strengths related to interpersonal skills, while weaknesses were mainly in the domain of clinical problem-solving. The median number of key strengths identified per student was 5, with 5 additional but lesser strengths. A median of 3 key and lesser weaknesses were identified. The average number of strategies selected to address an identified weakness was 1.2. Students rated the assessment process and its impact very positively. The systematic assessment of the consultation competence of medical students by direct observation involving real patients is feasible and facilitates the 'educational diagnosis' of individuals and of their peer group. It has informed development of teaching and generated research hypotheses.

  4. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
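
    In symbols, the construction described above takes the form below (a sketch in standard notation; sign and factor conventions for the Bures metric vary across the literature):

```latex
\[
  \pi_{\mathrm{Jeffreys}}(\theta) \;\propto\; \sqrt{\det g^{B}(\theta)},
  \qquad
  \sum_{i,j} g^{B}_{ij}(\theta)\, d\theta_i\, d\theta_j
  \;=\; 2\,\bigl[\,1 - F\bigl(\rho_\theta,\, \rho_{\theta + d\theta}\bigr)\bigr],
\]
% F is the Uhlmann fidelity. The volume element of the Bures metric
% generalizes the classical Jeffreys rule pi \propto sqrt(det I(theta)),
% with the Fisher information I replaced by its quantum counterpart.
```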

  5. Measuring Incompatible Observables by Exploiting Sequential Weak Values.

    PubMed

    Piacentini, F; Avella, A; Levi, M P; Gramegna, M; Brida, G; Degiovanni, I P; Cohen, E; Lussana, R; Villa, F; Tosi, A; Zappa, F; Genovese, M

    2016-10-21

    One of the most intriguing aspects of quantum mechanics is the impossibility of measuring at the same time observables corresponding to noncommuting operators, because of quantum uncertainty. This impossibility can be partially relaxed when considering joint or sequential weak value evaluation. Indeed, weak value measurements have been a real breakthrough in the quantum measurement framework that is of the utmost interest from both a fundamental and an applicative point of view. In this Letter, we show how we realized for the first time a sequential weak value evaluation of two incompatible observables using a genuine single-photon experiment. These (sometimes anomalous) sequential weak values revealed the single-operator weak values, as well as the local correlation between them.

  6. Measuring Incompatible Observables by Exploiting Sequential Weak Values

    NASA Astrophysics Data System (ADS)

    Piacentini, F.; Avella, A.; Levi, M. P.; Gramegna, M.; Brida, G.; Degiovanni, I. P.; Cohen, E.; Lussana, R.; Villa, F.; Tosi, A.; Zappa, F.; Genovese, M.

    2016-10-01

    One of the most intriguing aspects of quantum mechanics is the impossibility of measuring at the same time observables corresponding to noncommuting operators, because of quantum uncertainty. This impossibility can be partially relaxed when considering joint or sequential weak value evaluation. Indeed, weak value measurements have been a real breakthrough in the quantum measurement framework that is of the utmost interest from both a fundamental and an applicative point of view. In this Letter, we show how we realized for the first time a sequential weak value evaluation of two incompatible observables using a genuine single-photon experiment. These (sometimes anomalous) sequential weak values revealed the single-operator weak values, as well as the local correlation between them.

  7. An integrative framework for Bayesian variable selection with informative priors for identifying genes and pathways.

    PubMed

    Peng, Bin; Zhu, Dianwen; Ander, Bradley P; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty in incorporating correlational, structural, or functional structures amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with 'large p, small n' problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a Probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed.

  8. Weak Value Amplification of a Post-Selected Single Photon

    NASA Astrophysics Data System (ADS)

    Hallaji, Matin

    Weak value amplification (WVA) is a measurement technique in which the effect of a pre- and post-selected system on a weakly interacting probe is magnified. In this thesis, I present the first experimental observation of WVA of a single photon. We observed that a signal photon --- sent through a polarization interferometer and post-selected by photodetection in the almost-dark port --- can act like eight photons. The effect of this single photon is measured as a nonlinear phase shift on a separate laser beam. The interaction between the two is mediated by a sample of laser-cooled 85Rb atoms. Electromagnetically induced transparency (EIT) is used to enhance the nonlinearity and overcome resonant absorption. I believe this work to be the first demonstration of WVA where a deterministic interaction is used to entangle two distinct optical systems. In WVA, the amplification is contingent on discarding a large portion of the original data set. While amplification increases measurement sensitivity, discarding data worsens it. Questioning whether these competing effects conspire to improve or diminish measurement accuracy has resulted recently in controversy. I address this question by calculating the maximum amount of information achievable with the WVA technique. By comparing this information to that achievable by the standard technique, where no post-selection is employed, I show that the WVA technique can be advantageous under a certain class of noise models. Finally, I propose a way to optimally apply the WVA technique.

  9. New insights on emergence from the perspective of weak values and dynamical non-locality

    NASA Astrophysics Data System (ADS)

    Tollaksen, Jeff

    2014-04-01

    In this article, we will examine new fundamental aspects of "emergence" and "information" using novel approaches to quantum mechanics which originated from the group around Aharonov. The two-state vector formalism provides a complete description of pre- and post-selected quantum systems and has uncovered a host of new quantum phenomena which were previously hidden. The most important feature is that any weak coupling to a pre- and post-selected system is effectively a coupling to a "weak value" which is given by a simple expression depending on the two-state vector. In particular, weak values are the outcomes of so-called "weak measurements" which have recently become a very powerful tool for ultra-sensitive measurements. Using weak values, we will show how to separate a particle from its properties, not unlike the Cheshire cat story: "Well! I've often seen a cat without a grin," thought Alice; "but a grin without a cat! It's the most curious thing I ever saw in all my life!" Next, we address the question whether the physics on different scales "emerges" from quantum mechanics or whether the laws of physics at those scales are fundamental. We show that the classical limit of quantum mechanics is a far more complicated issue; it is in fact dramatically more involved and it requires a complete revision of all our intuitions. The revised intuitions can then serve as a guide to finding novel quantum effects. Next we show that novel experimental aspects of contextuality can be demonstrated with weak measurements and these suggest new restrictions on hidden variable approaches. Next we emphasize that the most important implication of the Aharonov-Bohm effect is the existence of non-local interactions which do not violate causality. Finally, we review some generalizations of quantum mechanics and their implications for "emergence" and "information." First, we review an alternative approach to quantum evolution in which each moment of time is viewed as a new "universe".

  10. Impact of Bayesian Priors on the Characterization of Binary Black Hole Coalescences

    NASA Astrophysics Data System (ADS)

    Vitale, Salvatore; Gerosa, Davide; Haster, Carl-Johan; Chatziioannou, Katerina; Zimmerman, Aaron

    2017-12-01

    In a regime where data are only mildly informative, prior choices can play a significant role in Bayesian statistical inference, potentially affecting the inferred physics. We show this is indeed the case for some of the parameters inferred from current gravitational-wave measurements of binary black hole coalescences. We reanalyze the first detections performed by the twin LIGO interferometers using alternative (and astrophysically motivated) prior assumptions. We find different prior distributions can introduce deviations in the resulting posteriors that impact the physical interpretation of these systems. For instance, (i) limits on the 90% credible interval on the effective black hole spin χ_eff are subject to variations of ~10% if a prior with black hole spins mostly aligned to the binary's angular momentum is considered instead of the standard choice of isotropic spin directions, and (ii) under priors motivated by the initial stellar mass function, we infer tighter constraints on the black hole masses, and in particular, we find no support for any of the inferred masses within the putative mass gap M ≲ 5 M⊙.

  11. Impact of Bayesian Priors on the Characterization of Binary Black Hole Coalescences.

    PubMed

    Vitale, Salvatore; Gerosa, Davide; Haster, Carl-Johan; Chatziioannou, Katerina; Zimmerman, Aaron

    2017-12-22

    In a regime where data are only mildly informative, prior choices can play a significant role in Bayesian statistical inference, potentially affecting the inferred physics. We show this is indeed the case for some of the parameters inferred from current gravitational-wave measurements of binary black hole coalescences. We reanalyze the first detections performed by the twin LIGO interferometers using alternative (and astrophysically motivated) prior assumptions. We find different prior distributions can introduce deviations in the resulting posteriors that impact the physical interpretation of these systems. For instance, (i) limits on the 90% credible interval on the effective black hole spin χ_eff are subject to variations of ~10% if a prior with black hole spins mostly aligned to the binary's angular momentum is considered instead of the standard choice of isotropic spin directions, and (ii) under priors motivated by the initial stellar mass function, we infer tighter constraints on the black hole masses, and in particular, we find no support for any of the inferred masses within the putative mass gap M ≲ 5 M_⊙.

  12. Evaluating the use of prior information under different pacing conditions on aircraft inspection performance: The use of virtual reality technology

    NASA Astrophysics Data System (ADS)

    Bowling, Shannon Raye

    The aircraft maintenance industry is a complex system consisting of human and machine components; because of this, much emphasis has been placed on improving aircraft-inspection performance. One proven technique for improving inspection performance is the use of training. There are several strategies that have been implemented for training, one of which is feedforward information. The use of prior information (feedforward) is known to positively affect inspection performance. This information can consist of knowledge about defect characteristics (types, severity/criticality, and location) and the probability of occurrence. Although several studies have been conducted that demonstrate the usefulness of feedforward as a training strategy, there are certain research issues that need to be addressed. This study evaluates the effect of feedforward information in a simulated 3-dimensional environment by the use of virtual reality. A controlled study was conducted to evaluate the effectiveness of feedforward information in a simulated aircraft inspection environment. The study was conducted in two phases. The first phase evaluated the difference between general and detailed inspection at different pacing levels. The second phase evaluated the effect of feedforward information pertaining to severity, probability and location. Analyses of the results showed that subjects performed significantly better during detailed inspection than during general inspection. Pacing also had the effect of reducing performance for both general and detailed inspection. The study also found that as the level of feedforward information increases, performance also increases. In addition to evaluating performance measures, the study also evaluated process and subjective measures. It was found that process measures such as number of fixation points, fixation groups, mean fixation duration, and percent area covered were all affected by the treatment levels. Analyses of the subjective

  13. The effects of activating prior topic and metacognitive knowledge on text comprehension scores.

    PubMed

    Kostons, Danny; van der Werf, Greetje

    2015-09-01

    Research on prior knowledge activation has consistently shown that activating learners' prior knowledge has beneficial effects on learning. If learners activate their prior knowledge, this activated knowledge serves as a framework for establishing relationships between the knowledge they already possess and new information provided to them. Thus far, prior knowledge activation has dealt primarily with topic knowledge in specific domains. Students, however, likely also possess at least some metacognitive knowledge useful in those domains, which, when activated, should aid in the deployment of helpful strategies during reading. In this study, we investigated the effects of both prior topic knowledge activation (PTKA) and prior metacognitive knowledge activation (PMKA) on text comprehension scores. Eighty-eight students in primary education were randomly distributed amongst the conditions of the 2 × 2 (PTKA yes/no × PMKA yes/no) factorial experiment. Results show that activating prior metacognitive knowledge had a beneficial effect on text comprehension, whereas activating prior topic knowledge, after correcting for the amount of prior knowledge, did not. Most studies deal with explicit instruction of metacognitive knowledge, but our results show that this may not be necessary, specifically in the case of students who already have some metacognitive knowledge. However, existing metacognitive knowledge needs to be activated in order for students to make better use of this knowledge. © 2015 The British Psychological Society.

  14. Neutron Measurements and the Weak Nucleon-Nucleon Interaction

    PubMed Central

    Snow, W. M.

    2005-01-01

    The weak interaction between nucleons remains one of the most poorly-understood sectors of the Standard Model. A quantitative description of this interaction is needed to understand weak interaction phenomena in atomic, nuclear, and hadronic systems. This paper summarizes briefly what is known about the weak nucleon-nucleon interaction, tries to place this phenomenon in the context of other studies of the weak and strong interactions, and outlines a set of measurements involving low energy neutrons which can lead to significant experimental progress. PMID:27308120

  15. Graph-Based Weakly-Supervised Methods for Information Extraction & Integration

    ERIC Educational Resources Information Center

    Talukdar, Partha Pratim

    2010-01-01

    The variety and complexity of potentially-related data resources available for querying--webpages, databases, data warehouses--has been growing ever more rapidly. There is a growing need to pose integrative queries "across" multiple such sources, exploiting foreign keys and other means of interlinking data to merge information from diverse…

  16. Partial quantum information.

    PubMed

    Horodecki, Michał; Oppenheim, Jonathan; Winter, Andreas

    2005-08-04

    Information--be it classical or quantum--is measured by the amount of communication needed to convey it. In the classical case, if the receiver has some prior information about the messages being conveyed, less communication is needed. Here we explore the concept of prior quantum information: given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the partial information one system needs, conditioned on its prior information. We find that it is given by the conditional entropy--a quantity that was known previously, but lacked an operational meaning. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, then sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a protocol that we term 'quantum state merging' which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, noiseless coding with side information, multiple access channels and assisted entanglement distillation.
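
    In standard notation, the "partial information" discussed above is the quantum conditional entropy, which, unlike its classical counterpart, can be negative:

```latex
\[
  S(A \mid B) \;=\; S(\rho_{AB}) \;-\; S(\rho_{B}),
  \qquad
  S(\rho) \;=\; -\operatorname{Tr}\,\rho \log_2 \rho .
\]
% Example: for a maximally entangled qubit pair, S(AB) = 0 while
% S(B) = 1 bit, so S(A|B) = -1: state merging then costs no quantum
% communication and leaves one ebit for future use.
```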

  17. Weak Interactions Group

    Science.gov Websites

    Research group website (UC Berkeley and Lawrence Berkeley National Laboratory) covering weak-interaction physics; listed projects include CUORE and KamLAND.

  18. Noninformative prior in the quantum statistical model of pure states

    NASA Astrophysics Data System (ADS)

    Tanaka, Fuyuhiko

    2012-06-01

    In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that there exist noninformative priors for a general class of pure-states models. Theoretically, it gives one way to represent ignorance about a given quantum system with partial information. Practically, our method proposes a default distribution on the model in order to use the Bayesian technique in quantum-state tomography with a small sample.

  19. Weak crystallization theory of metallic alloys

    DOE PAGES

    Martin, Ivar; Gopalakrishnan, Sarang; Demler, Eugene A.

    2016-06-20

    Crystallization is one of the most familiar, but hardest to analyze, phase transitions. The principal reason is that crystallization typically occurs via a strongly first-order phase transition, and thus rigorous treatment would require comparing energies of an infinite number of possible crystalline states with the energy of the liquid. A great simplification occurs when the crystallization transition happens to be weakly first order. In this case, weak crystallization theory, based on an unbiased Ginzburg-Landau expansion, can be applied. Even beyond its strict range of validity, it has been a useful qualitative tool for understanding crystallization. In its standard form, however, weak crystallization theory cannot explain the existence of a majority of observed crystalline and quasicrystalline states. Here we extend the weak crystallization theory to the case of metallic alloys. We identify a singular effect of itinerant electrons on the form of the weak crystallization free energy. It is geometric in nature, generating strong dependence of the free energy on the angles between ordering wave vectors of ionic density. That leads to stabilization of fcc, rhombohedral, and icosahedral quasicrystalline (iQC) phases, which are absent in the generic theory with only local interactions. Finally, as an application, we find the condition for stability of iQC that is consistent with the Hume-Rothery rules known empirically for the majority of stable iQC; namely, the length of the primary Bragg-peak wave vector is approximately equal to the diameter of the Fermi sphere.

  20. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose the online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large-Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.

  1. External priors for the next generation of CMB experiments

    DOE PAGES

    Manzotti, Alessandro; Dodelson, Scott; Park, Youngsoo

    2016-03-28

    Planned cosmic microwave background (CMB) experiments can dramatically improve what we know about neutrino physics, inflation, and dark energy. The low level of noise, together with improved angular resolution, will increase the signal to noise of the CMB polarized signal as well as the reconstructed lensing potential of high redshift large scale structure. Projected constraints on cosmological parameters are extremely tight, but these can be improved even further with information from external experiments. Here, we examine quantitatively the extent to which external priors can lead to improvement in projected constraints from a CMB-Stage IV (S4) experiment on neutrino and dark energy properties. We find that CMB S4 constraints on neutrino mass could be strongly enhanced by external constraints on the cold dark matter density Ω_c h² and the Hubble constant H_0. If polarization on the largest scales (ℓ < 50) is not measured, an external prior on the primordial amplitude A_s or the optical depth τ will also be important. A CMB constraint on the number of relativistic degrees of freedom, N_eff, will benefit from an external prior on the spectral index n_s and the baryon energy density Ω_b h². Lastly, an external prior on H_0 will help constrain the dark energy equation of state (w).
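
    The mechanics of adding an external prior to a forecast are simple: a Gaussian prior of width σ on parameter i adds 1/σ² to the (i, i) entry of the Fisher matrix before inversion. The sketch below uses a small made-up Fisher matrix, not the CMB-S4 forecast of this record.

```python
import numpy as np

params = ["Omega_c h^2", "H0", "sum m_nu"]          # illustrative ordering
F = np.array([[4.0e4, -1.2e3,  9.0e2],              # assumed Fisher matrix
              [-1.2e3,  6.0e2, -3.0e2],
              [ 9.0e2, -3.0e2,  4.0e2]])

def marginalized_sigmas(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

print("no external prior:", dict(zip(params, marginalized_sigmas(F))))

F_prior = F.copy()
F_prior[1, 1] += 1.0 / 0.5**2      # external prior sigma(H0) = 0.5 (assumed)
print("with H0 prior:   ", dict(zip(params, marginalized_sigmas(F_prior))))
```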

  2. An Integrative Framework for Bayesian Variable Selection with Informative Priors for Identifying Genes and Pathways

    PubMed Central

    Ander, Bradley P.; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R.; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty in incorporating correlational, structural, or functional structures amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with ‘large p, small n’ problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a Probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed. PMID:23844055

  3. Comparison of different strategies for using fossil calibrations to generate the time prior in Bayesian molecular clock dating.

    PubMed

    Barba-Montoya, Jose; Dos Reis, Mario; Yang, Ziheng

    2017-09-01

    Fossil calibrations are the utmost source of information for resolving the distances between molecular sequences into estimates of absolute times and absolute rates in molecular clock dating analysis. The quality of calibrations is thus expected to have a major impact on divergence time estimates even if a huge amount of molecular data is available. In Bayesian molecular clock dating, fossil calibration information is incorporated in the analysis through the prior on divergence times (the time prior). Here, we evaluate three strategies for converting fossil calibrations (in the form of minimum- and maximum-age bounds) into the prior on times, which differ according to whether they borrow information from the maximum age of ancestral nodes and minimum age of descendent nodes to form constraints for any given node on the phylogeny. We study a simple example that is analytically tractable, and analyze two real datasets (one of 10 primate species and another of 48 seed plant species) using three Bayesian dating programs: MCMCTree, MrBayes and BEAST2. We examine how different calibration strategies, the birth-death process, and automatic truncation (to enforce the constraint that ancestral nodes are older than descendent nodes) interact to determine the time prior. In general, truncation has a great impact on calibrations so that the effective priors on the calibration node ages after the truncation can be very different from the user-specified calibration densities. The different strategies for generating the effective prior also had considerable impact, leading to very different marginal effective priors. Arbitrary parameters used to implement minimum-bound calibrations were found to have a strong impact upon the prior and posterior of the divergence times. Our results highlight the importance of inspecting the joint time prior used by the dating program before any Bayesian dating analysis. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Phantom experiments using soft-prior regularization EIT for breast cancer imaging.

    PubMed

    Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J

    2017-06-01

    A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement to these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-prior regularization technique found average absolute tumor/inclusion errors of 0.015 S m⁻¹ for the cylindrical test and 0.055 S m⁻¹ and 0.080 S m⁻¹ for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.

  5. Cosmology with weak lensing surveys.

    PubMed

    Munshi, Dipak; Valageas, Patrick

    2005-12-15

    Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening mass. Since the lensing effects arise from deflections of the light rays due to fluctuations of the gravitational potential, they can be directly related to the underlying density field of the large-scale structures. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background observations as they probe unbiased nonlinear matter power spectra at medium redshift. Ongoing CMBR experiments such as WMAP and a future Planck satellite mission will measure the standard cosmological parameters with unprecedented accuracy. The focus of attention will then shift to understanding the nature of dark matter and vacuum energy: several recent studies suggest that lensing is the best method for constraining the dark energy equation of state. During the next 5 year period, ongoing and future weak lensing surveys such as the Joint Dark Energy Mission (JDEM; e.g. SNAP) or the Large Synoptic Survey Telescope will play a major role in advancing our understanding of the universe in this direction. In this review article, we describe various aspects of probing the matter power spectrum and the bi-spectrum and other related statistics with weak lensing surveys. This can be used to probe the background dynamics of the universe as well as the nature of dark matter and dark energy.

  6. Weak gauge boson radiation in parton showers

    NASA Astrophysics Data System (ADS)

    Christiansen, Jesper R.; Sjöstrand, Torbjörn

    2014-04-01

    The emission of W and Z gauge bosons off quarks is included in a traditional QCD + QED shower. The unitarity of the shower algorithm links the real radiation of the weak gauge bosons to the negative weak virtual corrections. The shower evolution process leads to a competition between QCD, QED and weak radiation, and allows for W and Z boson production inside jets. Various effects on LHC physics are studied, both at low and high transverse momenta, and effects at higher-energy hadron colliders are outlined.

  7. Dynamos driven by weak thermal convection and heterogeneous outer boundary heat flux

    NASA Astrophysics Data System (ADS)

    Sahoo, Swarandeep; Sreenivasan, Binod; Amit, Hagay

    2016-01-01

    We use numerical dynamo models with heterogeneous core-mantle boundary (CMB) heat flux to show that lower mantle lateral thermal variability may help support a dynamo under weak thermal convection. In our reference models with homogeneous CMB heat flux, convection is either marginally supercritical or absent, always below the threshold for dynamo onset. We find that lateral CMB heat flux variations organize the flow in the core into patterns that favour the growth of an early magnetic field. Heat flux patterns symmetric about the equator produce non-reversing magnetic fields, whereas anti-symmetric patterns produce polarity reversals. Our results may explain the existence of the geodynamo prior to inner core nucleation under a tight energy budget. Furthermore, in order to sustain a strong geomagnetic field, the lower mantle thermal distribution was likely dominantly symmetric about the equator.

  8. Cosmology and the weak interaction

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1989-01-01

    The weak interaction plays a critical role in modern Big Bang cosmology. Two of its most publicized cosmological connections are emphasized: big bang nucleosynthesis and dark matter. The first of these is connected to the cosmological prediction of neutrino flavors, N_ν ≈ 3, which is now being confirmed. The second is interrelated to the whole problem of galaxy and structure formation in the universe. The role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure is demonstrated.

  9. A Bayesian hierarchical model for mortality data from cluster-sampling household surveys in humanitarian crises.

    PubMed

    Heudtlass, Peter; Guha-Sapir, Debarati; Speybroeck, Niko

    2018-05-31

    The crude death rate (CDR) is one of the defining indicators of humanitarian emergencies. When data from vital registration systems are not available, it is common practice to estimate the CDR from household surveys with cluster-sampling design. However, sample sizes are often too small to compare mortality estimates to emergency thresholds, at least in a frequentist framework. Several authors have proposed Bayesian methods for health surveys in humanitarian crises. Here, we develop an approach specifically for mortality data and cluster-sampling surveys. We describe a Bayesian hierarchical Poisson-Gamma mixture model with generic (weakly informative) priors that could be used as default in absence of any specific prior knowledge, and compare Bayesian and frequentist CDR estimates using five different mortality datasets. We provide an interpretation of the Bayesian estimates in the context of an emergency threshold and demonstrate how to interpret parameters at the cluster level and ways in which informative priors can be introduced. With the same set of weakly informative priors, Bayesian CDR estimates are equivalent to frequentist estimates, for all practical purposes. The probability that the CDR surpasses the emergency threshold can be derived directly from the posterior of the mean of the mixing distribution. All observation in the datasets contribute to the estimation of cluster-level estimates, through the hierarchical structure of the model. In a context of sparse data, Bayesian mortality assessments have advantages over frequentist ones already when using only weakly informative priors. More informative priors offer a formal and transparent way of combining new data with existing data and expert knowledge and can help to improve decision-making in humanitarian crises by complementing frequentist estimates.
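
    The threshold probability mentioned above is easiest to see in a pooled (non-hierarchical) conjugate sketch: with deaths ~ Poisson(CDR × exposure) and a Gamma prior on the CDR, the posterior is again Gamma, and the exceedance probability is a single survival-function call. The counts and the weakly informative prior below are illustrative assumptions, not the paper's datasets or its hierarchical model.

```python
from scipy import stats

deaths, person_days = 18, 120_000      # assumed pooled survey totals
a0, b0 = 0.5, 100.0                    # weakly informative Gamma(shape, rate)

# Gamma(a0, rate b0) prior + Poisson likelihood -> Gamma(a0 + deaths,
# rate b0 + exposure); scipy parameterizes Gamma with scale = 1 / rate.
posterior = stats.gamma(a=a0 + deaths, scale=1.0 / (b0 + person_days))

threshold = 1.0 / 10_000               # 1 death per 10,000 persons per day
print(f"posterior mean CDR: {posterior.mean():.2e} per person-day")
print(f"P(CDR > emergency threshold) = {posterior.sf(threshold):.3f}")
```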

  10. Prior Knowledge Facilitates Mutual Gaze Convergence and Head Nodding Synchrony in Face-to-face Communication

    PubMed Central

    Thepsoonthorn, C.; Yokozuka, T.; Miura, S.; Ogawa, K.; Miyake, Y.

    2016-01-01

    As prior knowledge is claimed to be an essential key to achieve effective education, we are interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony are observed as indicators of communication effectiveness. We conducted an experiment on lecture task between lecturer and student under 2 conditions: prior knowledge and non-prior knowledge. The students in prior knowledge condition were provided the basic information about the lecture content and were assessed their understanding by the experimenter before starting the lecture while the students in non-prior knowledge had none. The result shows that the interaction in prior knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) compared to non-prior knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, the interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony. PMID:27910902

  11. Prior Knowledge Facilitates Mutual Gaze Convergence and Head Nodding Synchrony in Face-to-face Communication.

    PubMed

    Thepsoonthorn, C; Yokozuka, T; Miura, S; Ogawa, K; Miyake, Y

    2016-12-02

    As prior knowledge is claimed to be an essential key to achieve effective education, we are interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony are observed as indicators of communication effectiveness. We conducted an experiment on lecture task between lecturer and student under 2 conditions: prior knowledge and non-prior knowledge. The students in prior knowledge condition were provided the basic information about the lecture content and were assessed their understanding by the experimenter before starting the lecture while the students in non-prior knowledge had none. The result shows that the interaction in prior knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) compared to non-prior knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, the interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony.

  12. The KP Approximation Under a Weak Coriolis Forcing

    NASA Astrophysics Data System (ADS)

    Melinand, Benjamin

    2018-02-01

    In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.

  13. MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft

    PubMed Central

    Zhang, Jing

    2015-01-01

    This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover control performance of an asymmetric structural damaged aircraft. A modification of linear model representation is given. With prior knowledge on structural damage, a polytope linear parameter varying (LPV) model is derived to cover all concerned damage conditions. An MRAC method is developed for the polytope model, of which the stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted and thus decreases computational cost and requires less input information. The method is validated by simulations on NASA generic transport model (GTM) with damage. PMID:26180839

  14. Amplification of Angular Rotations Using Weak Measurements

    NASA Astrophysics Data System (ADS)

    Magaña-Loaiza, Omar S.; Mirhosseini, Mohammad; Rodenburg, Brandon; Boyd, Robert W.

    2014-05-01

    We present a weak measurement protocol that permits a sensitive estimation of angular rotations based on the concept of weak-value amplification. The shift in the state of a pointer, in both angular position and the conjugate orbital angular momentum bases, is used to estimate angular rotations. This is done by an amplification of both the real and imaginary parts of the weak-value of a polarization operator that has been coupled to the pointer, which is a spatial mode, via a spin-orbit coupling. Our experiment demonstrates the first realization of weak-value amplification in the azimuthal degree of freedom. We have achieved effective amplification factors as large as 100, providing a sensitivity that is on par with more complicated methods that employ quantum states of light or extremely large values of orbital angular momentum.
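
    The quantity amplified in records like this one is the weak value; in standard notation (pre-selected state |ψ⟩, post-selected state |φ⟩, coupled observable Â):

```latex
\[
  A_w \;=\; \frac{\langle \phi |\, \hat{A} \,| \psi \rangle}
                 {\langle \phi | \psi \rangle} .
\]
% As the post-selection overlap <phi|psi> -> 0, |A_w| can far exceed
% the eigenvalue range of A; both Re(A_w) and Im(A_w) shift the
% pointer, which is the origin of effective amplification factors
% such as the ~100 reported above.
```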

  15. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
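
    A sketch of the kind of default weakly informative prior studied in this record: logistic regression with independent Cauchy(0, 2.5) priors on the coefficients, fit here by posterior mode on a deliberately small simulated sample. The data-generating values and the optimizer choice are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 30                                    # deliberately small sample
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # intercept + one predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))

def neg_log_posterior(beta, scale=2.5):
    """Logistic log-likelihood plus independent Cauchy(0, 2.5) log-priors."""
    eta = X @ beta
    log_lik = np.sum(y * eta - np.logaddexp(0.0, eta))
    log_prior = -np.sum(np.log1p((beta / scale) ** 2))
    return -(log_lik + log_prior)

fit = minimize(neg_log_posterior, x0=np.zeros(2))
print("posterior mode:", fit.x)           # shrunk relative to the MLE
```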

  16. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei

    2018-04-01

    Purpose: compressed sensing reconstruction using total variation (TV) tends to over-smooth edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance edge information in compressed sensing reconstruction for CBCT. Methods: the edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contours in the prior CT are then mapped to the CBCT and used as the weight map for TV regularization to enhance edge information in CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. In the quantitative evaluation, relative error was used to measure pixel value differences, and edge cross correlation was defined as the similarity of edge information between reconstructed images and ground truth. Results: compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, relative errors against ground truth were 1.5%, 0.7% and 0.3% and edge cross correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to reductions in projection number. Edge enhancement was reduced slightly with noisy projections, but PCTV remained superior to the other methods. PCTV can maintain resolution while reducing noise in the low-mAs CatPhan reconstruction. Low-contrast edges were preserved better with PCTV than with TV and EPTV. Conclusion: PCTV preserved edge information as well as reduced streak artifacts and noise in low dose CBCT reconstruction.
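
    A minimal numpy sketch of the core PCTV idea described above: contours mapped from the prior CT become a spatial weight map that lowers the TV penalty on known edges. The weight values (1.0 off-contour, 0.1 on-contour) and all names are illustrative assumptions, not the authors' implementation, and the registration and reconstruction steps are omitted.

```python
import numpy as np

def weighted_tv(image, weight):
    """Edge-weighted anisotropic total variation: sum_i w_i * |grad(u)_i|.
    Lower weights on prior contours penalize edges less, preserving them."""
    dx = np.diff(image, axis=1, append=image[:, -1:])
    dy = np.diff(image, axis=0, append=image[-1:, :])
    return np.sum(weight * np.sqrt(dx ** 2 + dy ** 2 + 1e-12))

# Hypothetical weight map from a prior-CT edge mask registered onto CBCT:
# full penalty away from contours, weak penalty on them.
edge_mask = np.zeros((128, 128), dtype=bool)
edge_mask[40, 20:100] = True                 # stand-in for one mapped contour
weight = np.where(edge_mask, 0.1, 1.0)

image = np.random.default_rng(0).normal(size=(128, 128))
print("weighted TV value:", weighted_tv(image, weight))
```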

  17. Testing the weak gravity-cosmic censorship connection

    NASA Astrophysics Data System (ADS)

    Crisford, Toby; Horowitz, Gary T.; Santos, Jorge E.

    2018-03-01

    A surprising connection between the weak gravity conjecture and cosmic censorship has recently been proposed. In particular, it was argued that a promising class of counterexamples to cosmic censorship in four-dimensional Einstein-Maxwell-Λ theory would be removed if charged particles (with sufficient charge) were present. We test this idea and find that indeed if the weak gravity conjecture is true, one cannot violate cosmic censorship this way. Remarkably, the minimum value of charge required to preserve cosmic censorship appears to agree precisely with that proposed by the weak gravity conjecture.

  18. Weak connections form an infinite number of patterns in the brain

    NASA Astrophysics Data System (ADS)

    Ren, Hai-Peng; Bai, Chao; Baptista, Murilo S.; Grebogi, Celso

    2017-04-01

    Recently, much attention has been paid to interpreting the mechanisms of memory formation in terms of brain connectivity and dynamics. Within the plethora of collective states a complex network can exhibit, we show that the phenomenon of Collective Almost Synchronisation (CAS), which describes a state with an infinite number of patterns emerging in complex networks for weak coupling strengths, deserves special attention. We show that a simulated network of weakly connected neurons does produce CAS patterns and, additionally, produces an output that optimally models experimental electroencephalograph (EEG) signals. This work provides strong evidence that the brain operates locally in a CAS regime, allowing it to have an unlimited number of dynamical patterns, a state that could explain the enormous memory capacity of the brain and that supports the idea that local clusters of neurons are sufficiently decorrelated to process information independently.

  19. Informing the market: the strengths and weaknesses of information in the British National Health Service.

    PubMed

    McKee, M; Chenet, L

    1997-06-01

    Many countries are experimenting with planned (or quasi-) markets to discover if they can efficiently deliver health care in keeping with societal objectives. This paper examines the information requirements of this approach. Information is necessary in order to compare the performance of providers, to support billing, and to monitor access to care. It should be accurate, unambiguous, and resistant to manipulation. We draw on a project to find out how information on hospitalisation could be used in contracting in the British National Health Service. We conclude that the existing British system fails to provide robust measures of how many patients are treated, for what conditions, and with what treatments. We identify some promising remedies, others that are more difficult, and some which may be impossible to implement in any planned market, given the uncertainty of clinical practice.

  20. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
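
    The block-wise threshold selection and interpolation steps above lend themselves to a short sketch (Python with numpy/scipy). This is a simplified stand-in for the paper's pipeline: it assumes foreground pixels are brighter than background and omits the tensor-voting and saliency-weighting stages.

```python
import numpy as np
from scipy.ndimage import zoom

def block_threshold(fg_vals, bg_vals):
    """Pick the threshold minimizing misclassification of sampled
    foreground (assumed brighter) and background pixel values."""
    candidates = np.unique(np.concatenate([fg_vals, bg_vals]))
    errors = [np.sum(fg_vals < t) + np.sum(bg_vals >= t) for t in candidates]
    return candidates[int(np.argmin(errors))]

def per_pixel_thresholds(block_thr, image_shape):
    """Bilinearly interpolate the block-wise thresholds up to image size."""
    zy = image_shape[0] / block_thr.shape[0]
    zx = image_shape[1] / block_thr.shape[1]
    return zoom(block_thr, (zy, zx), order=1)

# Toy example: a 2x2 grid of blocks, each with sampled fg/bg intensities.
rng = np.random.default_rng(1)
blocks = np.array([[block_threshold(rng.normal(0.8, 0.1, 50),
                                    rng.normal(0.2, 0.1, 50))
                    for _ in range(2)] for _ in range(2)])
thr_image = per_pixel_thresholds(blocks, (64, 64))
print("per-pixel threshold image shape:", thr_image.shape)
```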

  1. Neurobehavioral and neurometabolic (SPECT) correlates of paranormal information: involvement of the right hemisphere and its sensitivity to weak complex magnetic fields.

    PubMed

    Roll, W G; Persinger, M A; Webster, D L; Tiller, S G; Cook, C M

    2002-02-01

    Experiments were designed to help elucidate the neurophysiological correlates for the experiences reported by Sean Harribance. For most of his life he has routinely experienced "flashes of images" of objects that were hidden and of accurate personal information concerning people with whom he was not familiar. The specificity of details for target pictures of people was correlated positively with the proportion of occipital alpha activity. Results from a complete neuropsychological assessment, Single Photon Emission Computed Tomography (SPECT), and screening electroencephalography suggested that his experiences were associated with increased activity within the parietal lobe and occipital regions of the right hemisphere. Sensed presences (subjectively localized to his left side) were evoked when weak magnetic fields, whose temporal structure simulated long-term potentiation in the hippocampus, were applied over his right temporoparietal lobes. These results suggest that the phenomena attributed to paranormal or "extrasensory" processes are correlated quantitatively with morphological and functional anomalies involving the right parietotemporal cortices (or their thalamic inputs) and the hippocampal formation.

  2. Precision cosmology with weak gravitational lensing

    NASA Astrophysics Data System (ADS)

    Hearin, Andrew P.

    In recent years, cosmological science has developed a highly predictive model for the universe on large scales that is in quantitative agreement with a wide range of astronomical observations. While the number and diversity of successes of this model provide great confidence that our general picture of cosmology is correct, numerous puzzles remain. In this dissertation, I analyze the potential of planned and near future galaxy surveys to provide new understanding of several unanswered questions in cosmology, and address some of the leading challenges to this observational program. In particular, I study an emerging technique called cosmic shear, the weak gravitational lensing produced by large scale structure. I focus on developing strategies to optimally use the cosmic shear signal observed in galaxy imaging surveys to uncover the physics of dark energy and the early universe. In chapter 1 I give an overview of a few unsolved mysteries in cosmology and I motivate weak lensing as a cosmological probe. I discuss the use of weak lensing as a test of general relativity in chapter 2 and assess the threat to such tests presented by our uncertainty in the physics of galaxy formation. Interpreting the cosmic shear signal requires knowledge of the redshift distribution of the lensed galaxies. This redshift distribution will be significantly uncertain since it must be determined photometrically. In chapter 3 I investigate the influence of photometric redshift errors on our ability to constrain dark energy models with weak lensing. The ability to study dark energy with cosmic shear is also limited by the imprecision in our understanding of the physics of gravitational collapse. In chapter 4 I present the stringent calibration requirements on this source of uncertainty. I study the potential of weak lensing to resolve a debate over a long-standing anomaly in CMB measurements in chapter 5. Finally, in chapter 6 I summarize my findings and conclude with a brief discussion of my

  3. Integration of prior knowledge into dense image matching for video surveillance

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups, they do not easily generalize to more challenging camera configurations. In the context of video surveillance, the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  4. On a stochastic control method for weakly coupled linear systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.

    1972-01-01

    The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller makes measurements only of its own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers must independently generate suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model is proposed which replaces the influence of one system on the other by a white noise process. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.

  5. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  6. Confidence set inference with a prior quadratic bound [in geophysics]

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.

  7. Evaluating the “recovery level” of endangered species without prior information before alien invasion

    PubMed Central

    Watari, Yuya; Nishijima, Shota; Fukasawa, Marina; Yamada, Fumio; Abe, Shintaro; Miyashita, Tadashi

    2013-01-01

    For maintaining social and financial support for eradication programs of invasive species, quantitative assessment of the recovery of native species or ecosystems is important because it provides a measurable parameter of success. However, setting a concrete goal for recovery is often difficult owing to the lack of information prior to the introduction of invaders. Here, we present a novel approach to evaluate the achievement level of invasive predator management based on the carrying capacity of endangered species estimated using long-term monitoring data. On Amami-Oshima Island, Japan, where an eradication project for the introduced small Indian mongoose has been ongoing since 2000, we surveyed the population densities of four endangered species threatened by the mongoose (the Amami rabbit, the Otton frog, the Amami tip-nosed frog, and Amami Ishikawa's frog) at four time points from 2003 to 2011. We estimated the carrying capacities of these species using a logistic growth model combined with the effects of mongoose predation and environmental heterogeneity. All species showed clear tendencies toward increasing density as mongoose density decreased, and they exhibited density-dependent population growth. The estimated carrying capacities of three endangered species had confidence intervals small enough to measure the recovery levels achieved by mongoose management. The population density of each endangered species has recovered to the level of the carrying capacity at about 20–40% of all sites, whereas no individuals were observed at more than 25% of all sites. We propose that the present approach, involving appropriate monitoring data of native organism populations, will be widely applicable to various eradication projects and will provide unambiguous goals for the management of invasive species. PMID:24363899

  8. A Solar Eruption from a Weak Magnetic Field Region with Relatively Strong Geo-Effectiveness

    NASA Astrophysics Data System (ADS)

    Wang, R.

    2017-12-01

    A moderate flare eruption on 2015 November 4, originating from a relatively weak magnetic field region, gave rise to a series of geo-effective events and caught our attention. The associated characteristics near the Earth are presented, indicating that the southward magnetic field in the sheath and the ICME induced a geomagnetic storm sequence with a global Dst minimum of -90 nT. The ICME is shown to have a small inclination angle by using a Grad-Shafranov technique, and corresponds to a flux rope (FR) structure lying horizontally on the solar surface. A small-scale magnetic cancelling feature was detected beneath the FR, co-aligned with the Atmospheric Imaging Assembly (AIA) EUV brightening prior to the eruption. Various magnetic features for space-weather forecasting are computed using a data product from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) called Space-weather HMI Active Region Patches (SHARPs), which helps us identify the changes of the photospheric magnetic fields during the magnetic cancellation process and shows that the magnetic reconnection associated with the flux cancellation is driven by the magnetic shearing motion on the photosphere. An analysis of the decay-index distributions at different heights is carried out. Combined with a filament height estimation method, the configuration of the FR is identified, and a decay-index critical value n = 1 is considered more appropriate for such a weak magnetic field region. Through a comprehensive analysis of the trigger mechanisms and conditions of the eruption, a clearer scenario of a CME from a relatively weak region is presented.

  9. Stabilization of weak ferromagnetism by strong magnetic response to epitaxial strain in multiferroic BiFeO3

    DOE PAGES

    Cooper, Valentino R.; Lee, Jun Hee; Krogel, Jaron T.; ...

    2015-08-06

    Multiferroic BiFeO3 exhibits excellent magnetoelectric coupling critical for magnetic information processing with minimal power consumption. However, the degenerate nature of the easy spin axis in the (111) plane presents roadblocks for real world applications. Here, we explore the stabilization and switchability of the weak ferromagnetic moments under applied epitaxial strain using a combination of first-principles calculations and group-theoretic analyses. We demonstrate that the antiferromagnetic moment vector can be stabilized along unique crystallographic directions ([110] and [-110]) under compressive and tensile strains. A direct coupling between the anisotropic antiferrodistortive rotations and Dzyaloshinskii-Moriya interactions drives the stabilization of weak ferromagnetism. Furthermore, energetically competing C- and G-type magnetic orderings are observed at high compressive strains, suggesting that it may be possible to switch the weak ferromagnetism on and off under application of strain. These findings emphasize the importance of strain and antiferrodistortive rotations as routes to enhancing induced weak ferromagnetism in multiferroic oxides.

  10. Theoretical implementation of prior knowledge in the design of a multi-scale prosthesis satisfaction questionnaire.

    PubMed

    Schürmann, Tim; Beckerle, Philipp; Preller, Julia; Vogt, Joachim; Christ, Oliver

    2016-12-19

    In product development for lower limb prosthetic devices, a set of special criteria needs to be met. Prosthetic devices have a direct impact on the rehabilitation process after an amputation, with both perceived technological and psychological aspects playing an important role. However, available psychometric questionnaires fail to consider the important links between these two dimensions. In this article a probabilistic latent trait model is proposed with seven technical and psychological factors that measure satisfaction with the prosthesis. The results of a first study are used to determine the basic parameters of the statistical model. These distributions represent hypotheses about factor loadings between manifest items and latent factors of the proposed psychometric questionnaire. A study was conducted and analyzed to form hypotheses for the prior distributions of the questionnaire's measurement model. An expert agreement study with 22 experts was used to determine the prior distribution of item-factor loadings in the model. The model parameters specified as part of the measurement model were informed prior distributions on the item-factor loadings. For the current 70 items in the questionnaire, each factor loading was set to represent the certainty with which experts had assigned the items to their respective factors. Considering only the measurement model and not the structural model of the questionnaire, 70 out of 217 informed prior distributions on parameters were set. The use of preliminary studies to set prior distributions in latent trait models, while a relatively new approach in psychological research, provides helpful information for the design of a seven-factor questionnaire that aims to identify relations between technical and psychological factors in prosthetic product design and rehabilitation medicine.

  11. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
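
    The sensitivity at issue is easy to see: under a Dirichlet process, the prior expected number of clusters among n observations is E[K] = Σ_{i=0}^{n-1} α/(α+i), so the prior placed on α directly encodes prior beliefs about clustering. The sketch below (a generic prior-predictive check in Python/numpy; the Gamma hyperparameters are assumed for illustration, not taken from the paper) shows how two different priors on α imply different expected cluster counts.

```python
import numpy as np

def expected_clusters(alpha, n):
    """E[K | alpha, n] = sum_{i=0}^{n-1} alpha / (alpha + i) for a DP."""
    return np.sum(alpha / (alpha + np.arange(n)))

# Prior-predictive check: how a Gamma prior on alpha translates into
# prior beliefs about the number of clusters among n = 100 observations.
rng = np.random.default_rng(0)
for shape, rate in [(1.0, 1.0), (2.0, 0.5)]:       # assumed hyperparameters
    alphas = rng.gamma(shape, 1.0 / rate, size=5000)
    ks = np.array([expected_clusters(a, 100) for a in alphas])
    print(f"Gamma({shape}, {rate}) prior on alpha: mean E[K] = {ks.mean():.1f}")
```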

  12. Weak lensing of the Lyman α forest

    NASA Astrophysics Data System (ADS)

    Croft, Rupert A. C.; Romeo, Alessandro; Metcalf, R. Benton

    2018-06-01

    The angular positions of quasars are deflected by the gravitational lensing effect of foreground matter. The Lyman α (Lyα) forest seen in the spectra of these quasars is therefore also lensed. We propose that the signature of weak gravitational lensing of the Lyα forest could be measured using techniques similar to those that have been applied to the lensed cosmic microwave background (CMB), and which have also been proposed for application to spectral data from 21-cm radio telescopes. As with 21-cm data, the forest has the advantage of spectral information, potentially yielding many lensed `slices' at different redshifts. We perform an illustrative idealized test, generating a high-resolution angular grid of quasars (of order arcminute separation), and lensing the Lyα forest spectra at redshifts z = 2-3 using a foreground density field. We find that standard quadratic estimators can be used to reconstruct images of the foreground mass distribution at z ∼ 1. There currently exists a wealth of Lyα forest data from quasar and galaxy spectral surveys, with smaller sightline separations expected in the future. Lyα forest lensing is sensitive to the foreground mass distribution at redshifts intermediate between those probed by CMB lensing and galaxy shear, and avoids the difficulties of shape measurement associated with the latter. With further refinement and application of mass reconstruction techniques, weak gravitational lensing of the high-redshift Lyα forest may become a useful new cosmological probe.

  13. Importance of weak minerals on earthquake mechanics

    NASA Astrophysics Data System (ADS)

    Kaneki, S.; Hirono, T.

    2017-12-01

    The role of weak minerals such as smectite and talc in earthquake mechanics is an important issue that has been debated for several decades. Traditionally, weak minerals in faults have been reported to weaken fault strength owing to their low frictional resistance. Furthermore, the velocity-strengthening behavior of one such weak mineral (talc) is considered to be responsible for fault creep (aseismic slip) in the San Andreas fault. In contrast, recent studies reported that the large amount of weak smectite in the Japan Trench could have facilitated the gigantic seismic slip during the 2011 Tohoku-oki earthquake. To investigate the role of weak minerals in the rupture propagation process and the magnitude of slip, we focus on the frictional properties of carbonaceous materials (CMs), which are representative weak materials widely distributed in and around convergent boundaries. Field observation and geochemical analyses revealed that a graphitized CM layer is distributed along the slip surface of a fossil plate-subduction fault. Laboratory friction experiments demonstrated that pure quartz, bulk mixtures with bituminous coal (1 wt.%), and quartz with layered coal samples exhibited almost similar frictional properties (initial, yield, and dynamic friction). However, mixtures of quartz (99 wt.%) and layered graphite (1 wt.%) showed significantly lower initial and yield friction coefficients (0.31 and 0.50, respectively). Furthermore, the stress ratio S, defined as (yield stress - initial stress)/(initial stress - dynamic stress), increased in the layered graphite samples (1.97) compared to the quartz samples (0.14). A similar trend was observed in smectite-rich fault gouge. By referring to reported results of dynamic rupture propagation simulations using an S ratio of 1.4 (a typical value for the Japan Trench) and 2.0 (this study), we confirmed that a higher S ratio results in a slip distance smaller by approximately 20%. On the basis of these results, we could conclude that weak minerals have lower

  14. Disentangling weak coherence and executive dysfunction: planning drawing in autism and attention-deficit/hyperactivity disorder.

    PubMed Central

    Booth, Rhonda; Charlton, Rebecca; Hughes, Claire; Happé, Francesca

    2003-01-01

    A tendency to focus on details at the expense of configural information, 'weak coherence', has been proposed as a cognitive style in autism. In the present study we tested whether weak coherence might be the result of executive dysfunction, by testing clinical groups known to show deficits on tests of executive control. Boys with autism spectrum disorders (ASD) were compared with age- and intelligence quotient (IQ)-matched boys with attention-deficit/hyperactivity disorder (ADHD), and typically developing (TD) boys, on a drawing task requiring planning for the inclusion of a new element. Weak coherence was measured through analysis of drawing style. In line with the predictions made, the ASD group was more detail-focused in their drawings than were either ADHD or TD boys. The ASD and ADHD groups both showed planning impairments, which were more severe in the former group. Poor planning did not, however, predict detail-focus, and scores on the two aspects of the task were unrelated in the clinical groups. These findings indicate that weak coherence may indeed be a cognitive style specific to autism and unrelated to cognitive deficits in frontal functions. PMID:12639335

  15. Mass Mapping Abell 2261 with Kinematic Weak Lensing: A Pilot Study for NASA's WFIRST mission

    NASA Astrophysics Data System (ADS)

    Eifler, Tim

    2015-02-01

    We propose to investigate a new method to extract cosmological information from weak gravitational lensing in the context of the mission design and requirements of NASA's Wide-Field Infrared Survey Telescope (WFIRST). In a recent paper (Huff, Krause, Eifler, George, Schlegel 2013) we describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the well-established Tully-Fisher (TF) relation in order to control for the intrinsic orientations of galaxy disks. Whereas shape noise is one of the major limitations for current weak lensing experiments, it ceases to be an important source of statistical error in our newly proposed technique. Specifically, we propose a pilot study that maps the projected mass distribution in the massive cluster Abell 2261 (z = 0.225) to infer whether this promising technique faces systematics that would prohibit its application to WFIRST. In addition to the cosmological weak lensing prospects, these measurements will also allow us to test kinematic lensing in the context of cluster mass reconstruction with a drastically improved signal-to-noise ratio (S/N) per galaxy.

  16. Characterization of in situ oil shale retorts prior to ignition

    DOEpatents

    Turner, Thomas F.; Moore, Dennis F.

    1984-01-01

    Method and system for characterizing a vertical modified in situ oil shale retort prior to ignition of the retort. The retort is formed by mining a void at the bottom of a proposed retort in an oil shale deposit. The deposit is then sequentially blasted into the void to form a plurality of layers of rubble. A plurality of units each including a tracer gas cannister are installed at the upper level of each rubble layer prior to blasting to form the next layer. Each of the units includes a receiver that is responsive to a coded electromagnetic (EM) signal to release gas from the associated cannister into the rubble. Coded EM signals are transmitted to the receivers to selectively release gas from the cannisters. The released gas flows through the retort to an outlet line connected to the floor of the retort. The time of arrival of the gas at a detector unit in the outlet line relative to the time of release of gas from the cannisters is monitored. This information enables the retort to be characterized prior to ignition.

  17. Security of BB84 with weak randomness and imperfect qubit encoding

    NASA Astrophysics Data System (ADS)

    Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei

    2018-03-01

    The main threats to the well-known Bennett-Brassard 1984 (BB84) practical quantum key distribution (QKD) systems are that their encoding is inaccurate and their measurement devices may be vulnerable to particular attacks. Thus, a general physical model or security proof that tackles these loopholes simultaneously and quantitatively is highly desired. Here we give a framework for the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper. The inevitable encoding inaccuracy is also introduced here. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.

  18. REPRESENTATIONS OF WEAK AND STRONG INTEGRALS IN BANACH SPACES

    PubMed Central

    Brooks, James K.

    1969-01-01

    We establish a representation of the Gelfand-Pettis (weak) integral in terms of unconditionally convergent series. Moreover, absolute convergence of the series is a necessary and sufficient condition in order that the weak integral coincide with the Bochner integral. Two applications of the representation are given. The first is a simplified proof of the countable additivity and absolute continuity of the indefinite weak integral. The second application is to probability theory; we characterize the conditional expectation of a weakly integrable function. PMID:16591755

  19. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
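
    The updating scheme sketched in this abstract can be stated compactly. In its standard form, ME selects the posterior p that maximizes the relative entropy with respect to the prior q subject to whatever constraints encode the new information:

```latex
p^{*} = \arg\max_{p}\; S[p, q], \qquad
S[p, q] = -\int dx\, p(x)\,\ln\frac{p(x)}{q(x)},
\quad \text{subject to} \quad \int dx\, p(x) = 1, \;\; \int dx\, p(x)\, f(x) = F .
```

    With a uniform prior q this reduces to MaxEnt, and with constraints supplied by data it reproduces Bayes' rule, which is the unification the abstract describes.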

  20. Systematization of published research graphics characterizing weakly bound molecular complexes with carbon dioxide

    NASA Astrophysics Data System (ADS)

    Lavrentiev, N. A.; Rodimova, O. B.; Fazliev, A. Z.; Vigasin, A. A.

    2017-11-01

    An approach is suggested for forming applied ontologies in subject domains whose results are represented in graphical form, together with an approach to systematizing research graphics that contain information on weakly bound carbon dioxide complexes. The results of systematizing the research plots and images that characterize the spectral properties of the CO2 complexes are presented.

  1. CP Violation, Neutral Currents, and Weak Equivalence

    DOE R&D Accomplishments Database

    Fitch, V. L.

    1972-03-23

    Within the past few months two excellent summaries of the state of our knowledge of the weak interactions have been presented. Correspondingly, we will not attempt a comprehensive review but instead concentrate this discussion on the status of CP violation, the question of the neutral currents, and the weak equivalence principle.

  2. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.

  3. Weak Gravitational Lensing by the Nearby Cluster Abell 3667.

    PubMed

    Joffre; Fischer; Frieman; McKay; Mohr; Nichol; Johnston; Sheldon; Bernstein

    2000-05-10

    We present two weak lensing reconstructions of the nearby (z_cl = 0.055) merging cluster Abell 3667, based on observations taken approximately 1 yr apart under different seeing conditions. This is the lowest-redshift cluster with a weak lensing mass reconstruction to date. The reproducibility of features in the two mass maps demonstrates that weak lensing studies of low-redshift clusters are feasible. These data constitute the first results from an X-ray luminosity-selected weak lensing survey of 19 low-redshift (z < 0.1) southern clusters.

  4. Substellar Companions to weak-line T Tauri Stars

    NASA Astrophysics Data System (ADS)

    Brandner, W.; Alcala, J. M.; Covino, E.; Frink, S.

    1997-05-01

    Weak-line T Tauri stars, contrary to classical T Tauri stars, no longer possess massive circumstellar disks. In weak-line T Tauri stars, the circumstellar matter was either accreted onto the star or has been redistributed. Disk instabilities in the outer disk might result in the formation of brown dwarfs and giant planets. Based on photometric and spectroscopic studies of ROSAT sources, we selected an initial sample of 200 weak-line T Tauri stars in the Chamaeleon T association and the Scorpius-Centaurus OB association. In the course of follow-up observations we identified visual and spectroscopic binary stars and excluded them from our final list, as the complex dynamics and gravitational interaction in binary systems might aggravate or even completely inhibit the formation of planets (depending on the physical separation of the binary components and their mass ratio). The membership of individual stars in the associations was established from proper motion studies and radial velocity surveys. Our final sample consists of 70 single weak-line T Tauri stars. We have initiated a program to spatially resolve young brown dwarfs and young giant planets as companions to single weak-line T Tauri stars using adaptive optics at the ESO 3.6m telescope and HST/NICMOS. In this poster we describe the observing strategy and present first results of our adaptive optics observations.

  5. Part I. Student success in intensive versus traditional introductory chemistry courses. Part II. Synthesis of salts of the weakly coordinating trisphat anion

    NASA Astrophysics Data System (ADS)

    Hall, Mildred V.

    Part I. Intensive courses have been shown to be associated with equal or greater student success than traditional-length courses in a wide variety of disciplines and education levels. Student records from intensive and traditional-length introductory general chemistry courses were analyzed to determine the effects of the course format, the level of academic experience, life experience (age), GPA, academic major and gender on student success in the course. Pretest scores, GPA and ACT composite scores were used as measures of academic ability and prior knowledge; t-tests comparing the means of these variables were used to establish that the populations were comparable prior to the course. Final exam scores, total course points and pretest-posttest differences were used as measures of student success; t-tests were used to determine if differences existed between the populations. ANCOVA analyses revealed that student GPA, pretest scores and course format were the only variables tested that were significant in accounting for the variance of the academic success measures. In general, the results indicate that students achieved greater academic success in the intensive-format course, regardless of the level of academic experience, life experience, academic major or gender. Part II. Weakly coordinating anions have many important applications, one of which is to function as co-catalysts in the polymerization of olefins by zirconocene. The structure of the tris(tetrachlorobenzenediolato)phosphate(V) or "trisphat" anion suggests that it might be an outstanding example of a weakly coordinating anion. Trisphat acid was synthesized and immediately used to prepare the stable tributylammonium trisphat salt, which was further reacted to produce trisphat salts of Group I metal cations in high yields. Results of the 35Cl NQR analysis of these trisphat salts indicate only very weak coordination between the metal cations and the chlorine atoms of the trisphat anion.

  6. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by solving a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the $\ell_0$ pseudo norm, is able to induce sparsity better than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter needed to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression problem, and its convergence behavior is studied theoretically. The general framework is then applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
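
    The iterative firm-shrinkage step mentioned at the end has a simple closed form. The sketch below (Python/numpy) wraps the standard firm-thresholding operator, the proximal map of a classic weakly convex sparsity penalty, in a plain proximal-gradient loop for logistic loss; the parameter names and values are assumptions for illustration, not the authors' exact instantiation.

```python
import numpy as np

def firm_shrink(x, lam, mu):
    """Firm-thresholding operator (between soft and hard thresholding):
    zeroes |x| <= lam, passes |x| >= mu unchanged, interpolates in between."""
    out = np.where(np.abs(x) <= lam, 0.0,
                   np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam))
    return np.where(np.abs(x) >= mu, x, out)

def sparse_logistic_pgd(X, y, lam=0.05, mu=0.5, step=0.1, iters=500):
    """Proximal gradient descent for logistic loss, with the proximal
    step of the weakly convex penalty given by firm shrinkage."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (p - y) / len(y)       # gradient of mean logistic loss
        beta = firm_shrink(beta - step * grad, lam, mu)
    return beta

# Toy demo: recover a 3-sparse coefficient vector from 200 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_beta = np.zeros(20)
true_beta[:3] = [1.5, -2.0, 1.0]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)
print("recovered support:", np.nonzero(sparse_logistic_pgd(X, y))[0])
```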

  7. Entanglement and Weak Values: A Quantum Miracle Cookbook

    NASA Astrophysics Data System (ADS)

    Botero, Alonso

    The concept of the weak value has proved to be a powerful and operationally grounded framework for the assignment of physical properties to a quantum system at any given time. More importantly, this framework has allowed us to identify a whole range of surprising quantum effects, or "miracles", which are readily testable but which lie buried "under the noise" when the results of measurements are not post-selected. In all cases, these miracles have to do with the fact that weak values can take values lying outside the conventional ranges of quantum expectation values. We explore the extent to which such miracles are possible within the weak value framework. As we show, given appropriate initial and final states, it is generally possible to produce any set of weak values that is consistent with the linearity of weak values, provided that the states are entangled states of the system with some external ancillary system. Through a simple constructive proof, we obtain a recipe for arbitrary quantum miracles, and give examples of some interesting applications. In particular, we show how the classical description of an infinitely-localized point in phase-space is contained in the weak-value framework augmented by quantum entanglement. [Editor's note: for a video of the talk given by Prof. Botero at the Aharonov-80 conference in 2012 at Chapman University, see http://quantum.chapman.edu/talk-27.]

  8. The Effect of Weak Resistivity and Weak Thermal Diffusion on Short-wavelength Magnetic Buoyancy Instability

    NASA Astrophysics Data System (ADS)

    Gradzki, Marek J.; Mizerski, Krzysztof A.

    2018-03-01

    Magnetic buoyancy instability in weakly resistive and thermally conductive plasma is an important mechanism of magnetic field expulsion in astrophysical systems. It is often invoked, e.g., in the context of the solar interior. Here, we revisit a problem introduced by Gilman: the short-wavelength linear stability of a plane layer of compressible, isothermal, and weakly diffusive fluid permeated by a horizontal magnetic field of strength decreasing with height. In this physical setting, we investigate the effect of weak resistivity and weak thermal conductivity on short-wavelength perturbations, localized in the vertical direction, and show that the presence of diffusion allows us to establish the wavelength of the most unstable mode, which is undetermined in an ideal fluid. When diffusive effects are neglected, the perturbations are amplified at a rate that monotonically increases as the wavelength tends to zero. We demonstrate that, when resistivity and thermal conduction are introduced, the wavelength of the most unstable perturbation is established, and its scaling law with the diffusion parameters depends on the gradients of the mean magnetic field, temperature, and density. Three main dynamical regimes are identified, with the wavelength of the most unstable mode scaling as either λ/d ∼ U_κ^{3/5}, λ/d ∼ U_κ^{3/4}, or λ/d ∼ U_κ^{1/3}, where d is the layer thickness and U_κ is the ratio of the characteristic thermal diffusion velocity scale to the free-fall velocity. Our analytic results are backed up by a series of numerical solutions. The two-dimensional interchange modes are shown to dominate over three-dimensional ones when the magnetic field and/or temperature gradients are strong enough.

  9. Strength functions, entropies, and duality in weakly to strongly interacting fermionic systems.

    PubMed

    Angom, D; Ghosh, S; Kota, V K B

    2004-01-01

    We revisit statistical wave function properties of finite systems of interacting fermions in the light of strength functions and their participation ratio and information entropy. For weakly interacting fermions in a mean field with random two-body interactions of increasing strength λ, the strength functions F_k(E) are well known to change, in the regime where level fluctuations follow Wigner's surmise, from Breit-Wigner to Gaussian form. We propose an ansatz for the function describing this transition, which we use to investigate the participation ratio ξ_2 and the information entropy S_info during this crossover, thereby extending the known behavior valid in the Gaussian domain into much of the Breit-Wigner domain. Our method also allows us to derive the scaling law λ_d ≈ 1/√m (m is the number of fermions) for the duality point λ = λ_d, where F_k(E), ξ_2, and S_info in both the weak (λ = 0) and strong mixing (λ = ∞) bases coincide. As an application, the ansatz function for strength functions is used to describe the Breit-Wigner to Gaussian transition seen in the neutral atoms Ce I to Sm I as the number of valence electrons changes from 4 to 8.

  10. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    PubMed

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework to identify a target sound, and a bag trimming algorithm which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is valid, and results show that classifiers trained using the automatically segmented training sets were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving average F-measures of 0.969 and 0.87 for two weakly supervised datasets.

  11. Line Assignments and Position Measurements in Several Weak CO2 Bands between 4590/cm and 7930/cm

    NASA Technical Reports Server (NTRS)

    Giver, L. P.; Kshirsagar, R. J.; Freedman, R. C.; Chackerian, C.; Wattson, R. B.

    1998-01-01

    A substantial set of CO2 spectra from 4500 to 12000/cm has been obtained at Ames with a 1500 m path length using a Bomem DA8 FTS. The signal/noise was improved compared to prior spectra obtained in this laboratory by including a filter wheel limiting the band-pass of each spectrum to several hundred/cm. We have measured positions of lines in several weak bands not previously resolved in laboratory spectra. Using our positions and assignments of lines of the Q branch of the 31103-00001 vibrational band at 4591/cm, we have re-determined the rotational constants for the 31103f levels. Q-branch lines of this band were previously observed, but misassigned, in Venus spectra by Mandin. The current HITRAN values of the rotational constants for this level are incorrect due to the Q-branch misassignments. Our prior measurements of the 21122-00001 vibrational band at 7901/cm were limited to Q- and R-branch lines; with the improved signal/noise of these new spectra we have now measured lines in the weaker P branch.

  12. The influence of baseline marijuana use on treatment of cocaine dependence: application of an informative-priors bayesian approach.

    PubMed

    Green, Charles; Schmitz, Joy; Lindsay, Jan; Pedroza, Claudia; Lane, Scott; Agnelli, Rob; Kjome, Kimberley; Moeller, F Gerard

    2012-01-01

    Marijuana use is prevalent among patients with cocaine dependence and often non-exclusionary in clinical trials of potential cocaine medications. The dual-focus of this study was to (1) examine the moderating effect of baseline marijuana use on response to treatment with levodopa/carbidopa for cocaine dependence; and (2) apply an informative-priors, Bayesian approach for estimating the probability of a subgroup-by-treatment interaction effect. A secondary data analysis of two previously published, double-blind, randomized controlled trials provided complete data for the historical (Study 1: N = 64 placebo), and current (Study 2: N = 113) data sets. Negative binomial regression evaluated Treatment Effectiveness Scores (TES) as a function of medication condition (levodopa/carbidopa, placebo), baseline marijuana use (days in past 30), and their interaction. Bayesian analysis indicated that there was a 96% chance that baseline marijuana use predicts differential response to treatment with levodopa/carbidopa. Simple effects indicated that among participants receiving levodopa/carbidopa the probability that baseline marijuana confers harm in terms of reducing TES was 0.981; whereas the probability that marijuana confers harm within the placebo condition was 0.163. For every additional day of marijuana use reported at baseline, participants in the levodopa/carbidopa condition demonstrated a 5.4% decrease in TES; while participants in the placebo condition demonstrated a 4.9% increase in TES. The potential moderating effect of marijuana on cocaine treatment response should be considered in future trial designs. Applying Bayesian subgroup analysis proved informative in characterizing this patient-treatment interaction effect.
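
    To make the modeling concrete, here is a minimal sketch (Python with numpy/scipy) of a negative binomial regression with a treatment-by-marijuana interaction and informative normal priors on the coefficients, fit at the posterior mode. The design, dispersion value, prior means and standard deviations are all hypothetical placeholders for the historical-data-derived priors the study used; none of the numbers come from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def neg_log_posterior(beta, X, y, prior_mean, prior_sd, r=5.0):
    """Negative binomial regression (log link, fixed dispersion r) with
    independent normal priors on the coefficients."""
    mu = np.exp(X @ beta)
    p = r / (r + mu)                        # scipy's (n, p) parameterization
    log_lik = np.sum(nbinom.logpmf(y, r, p))
    log_prior = np.sum(-0.5 * ((beta - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

# Hypothetical design: intercept, treatment arm, baseline marijuana days,
# and their interaction -- the subgroup-by-treatment term of interest.
rng = np.random.default_rng(3)
n = 113
arm = rng.integers(0, 2, n)
mj = rng.integers(0, 31, n)
X = np.column_stack([np.ones(n), arm, mj, arm * mj])
y = rng.poisson(8, n)                       # placeholder outcome counts (TES)

# Priors centered on (hypothetical) estimates from the historical study.
prior_mean = np.array([2.0, 0.0, 0.0, 0.0])
prior_sd = np.array([1.0, 0.5, 0.05, 0.05])

fit = minimize(neg_log_posterior, np.zeros(4),
               args=(X, y, prior_mean, prior_sd), method="BFGS")
print("MAP coefficients:", fit.x)
```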

  13. The Influence of Baseline Marijuana Use on Treatment of Cocaine Dependence: Application of an Informative-Priors Bayesian Approach

    PubMed Central

    Green, Charles; Schmitz, Joy; Lindsay, Jan; Pedroza, Claudia; Lane, Scott; Agnelli, Rob; Kjome, Kimberley; Moeller, F. Gerard

    2012-01-01

    Background: Marijuana use is prevalent among patients with cocaine dependence and often non-exclusionary in clinical trials of potential cocaine medications. The dual-focus of this study was to (1) examine the moderating effect of baseline marijuana use on response to treatment with levodopa/carbidopa for cocaine dependence; and (2) apply an informative-priors, Bayesian approach for estimating the probability of a subgroup-by-treatment interaction effect. Method: A secondary data analysis of two previously published, double-blind, randomized controlled trials provided complete data for the historical (Study 1: N = 64 placebo), and current (Study 2: N = 113) data sets. Negative binomial regression evaluated Treatment Effectiveness Scores (TES) as a function of medication condition (levodopa/carbidopa, placebo), baseline marijuana use (days in past 30), and their interaction. Results: Bayesian analysis indicated that there was a 96% chance that baseline marijuana use predicts differential response to treatment with levodopa/carbidopa. Simple effects indicated that among participants receiving levodopa/carbidopa the probability that baseline marijuana confers harm in terms of reducing TES was 0.981; whereas the probability that marijuana confers harm within the placebo condition was 0.163. For every additional day of marijuana use reported at baseline, participants in the levodopa/carbidopa condition demonstrated a 5.4% decrease in TES; while participants in the placebo condition demonstrated a 4.9% increase in TES. Conclusion: The potential moderating effect of marijuana on cocaine treatment response should be considered in future trial designs. Applying Bayesian subgroup analysis proved informative in characterizing this patient-treatment interaction effect. PMID:23115553

  14. Weak antilocalization of composite fermions in graphene

    NASA Astrophysics Data System (ADS)

    Laitinen, Antti; Kumar, Manohar; Hakonen, Pertti J.

    2018-02-01

    We demonstrate experimentally that composite fermions in monolayer graphene display weak antilocalization. Our experiments deal with fractional quantum Hall (FQH) states in high-mobility, suspended graphene Corbino disks in the vicinity of ν = 1/2. We find a strong temperature dependence of conductivity σ away from half filling, which is consistent with the expected electron-electron interaction-induced gaps in the FQH state. At half filling, however, the temperature dependence of conductivity σ(T) becomes quite weak, as anticipated for a Fermi sea of composite fermions, and we find a logarithmic dependence of σ on T. The sign of this quantum correction coincides with the weak antilocalization of graphene composite fermions, indigenous to chiral Dirac particles.

  15. Exploring patterns in resource utilization prior to the formal identification of homelessness in recently returned veterans.

    PubMed

    Gundlapalli, Adi V; Redd, Andrew; Carter, Marjorie E; Palmer, Miland; Peterson, Rachel; Samore, Matthew H

    2014-01-01

    There are limited data on resources utilized by US Veterans prior to their identification as being homeless. We performed visual analytics on longitudinal medical encounter data prior to the official recognition of homelessness in a large cohort of OEF/OIF Veterans. A statistically significant increase in numbers of several categories of visits in the immediate 30 days prior to the recognition of homelessness was noted as compared to an earlier period. This finding has the potential to inform prediction algorithms based on structured data with a view to intervention and mitigation of homelessness among Veterans.

  16. Non-negative Matrix Factorization for Self-calibration of Photometric Redshift Scatter in Weak-lensing Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn

    Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy–galaxy correlations and galaxy–shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%–1% merely using photometric data and does not rely on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%–1%, and the true mean redshifts of photo-z bins at the level of 0.001, which may satisfy the requirements in future lensing surveys.
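
    The non-negative matrix factorization building block referred to above can be illustrated compactly. The sketch below (Python/numpy) implements the classical Lee-Seung multiplicative updates, which keep both factors non-negative at every iteration, much as the scattering-probability matrix must remain non-negative; it is a generic illustration, not the authors' constrained fixed-point solver.

```python
import numpy as np

def nmf_multiplicative(V, k, iters=500, eps=1e-9):
    """Classical Lee-Seung multiplicative updates minimizing ||V - WH||_F^2,
    keeping W and H non-negative throughout (the NMF building block)."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy example: recover a non-negative low-rank structure, loosely analogous
# to reconstructing scattering probabilities between redshift bins.
A = np.random.default_rng(1).random((6, 4))
B = np.random.default_rng(2).random((4, 8))
V = A @ B                                   # exactly rank-4, non-negative
W, H = nmf_multiplicative(V, k=4)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```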

  17. [Coalition tactics of the weaks in the power struggle].

    PubMed

    Yamaguchi, H

    1991-02-01

    This study investigated the coalition tactics of the weak players in a situation where four players in a power relationship of the form "A > B = C = D, A < (B + C + D)" struggled for new resources of power. Subjects were 128 male undergraduates divided into 32 groups of four members each. The experimental design was 2 (determinant of power strength: resource size or rank order) x 2 (range of power distance between the strong and the weak: large or small). The results revealed that the weak players preferred the revolutionary coalition "BCD" when resource size determined power strength, but preferred the getting-ahead coalitions "AB, AC, AD" when rank order determined it, and that expansion of the power distance reinforced these tendencies. It was also shown, however, that the weak players did not always form the coalitions they had hoped for before bargaining. In conclusion, the necessity of examining the characteristics of weak players' mentalities and behaviors in coalition bargaining was suggested.

  18. Phase slips in superconducting weak links

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimmel, Gregory; Glatz, Andreas; Aranson, Igor S.

    2017-01-01

    Superconducting vortices and phase slips are primary mechanisms of dissipation in superconducting, superfluid, and cold-atom systems. While the dynamics of vortices is fairly well described, phase slips occurring in quasi-one-dimensional superconducting wires still elude understanding. The main reason is that phase slips are strongly nonlinear time-dependent phenomena that cannot be cast in terms of small perturbations of the superconducting state. Here we study phase slips occurring in superconducting weak links. Thanks to partial suppression of superconductivity in weak links, we employ a weakly nonlinear approximation for dynamic phase slips. This approximation is not valid for homogeneous superconducting wires and slabs. Using the numerical solution of the time-dependent Ginzburg-Landau equation and bifurcation analysis of stationary solutions, we show that the onset of phase slips occurs via an infinite period bifurcation, which is manifested in a specific voltage-current dependence. Our analytical results are in good agreement with simulations.

  19. Automatic protein structure solution from weak X-ray data

    NASA Astrophysics Data System (ADS)

    Skubák, Pavol; Pannu, Navraj S.

    2013-11-01

    Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often an impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the protein data bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.

  20. Overdamping by weakly coupled environments

    NASA Astrophysics Data System (ADS)

    Esposito, Massimiliano; Haake, Fritz

    2005-12-01

    A quantum system weakly interacting with a fast environment usually undergoes a relaxation with complex frequencies whose imaginary parts are damping rates quadratic in the coupling to the environment in accord with Fermi’s “golden rule.” We show for various models (spin damped by harmonic-oscillator or random-matrix baths, quantum diffusion, and quantum Brownian motion) that upon increasing the coupling up to a critical value still small enough to allow for weak-coupling Markovian master equations, a different relaxation regime can occur. In that regime, complex frequencies lose their real parts such that the process becomes overdamped. Our results call into question the standard belief that overdamping is exclusively a strong coupling feature.

  1. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    NASA Astrophysics Data System (ADS)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models, including Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, and the conditionally autoregressive (CAR) and simultaneous autoregressive (SAR) models with a spatial autoregression parameter ρ. The performance of the reference priors for the ANOVA/ANCOVA models is evaluated in simulation studies, with comparisons to Jeffreys' prior and least squares estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between type 2 diabetes risk (through hemoglobin A1c) and different smoking levels is investigated. In both the simulation studies and the real-data modeling, the reference priors that incorporate internal order information show good performance and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated on the "1999 SAT State Average Verbal Scores" data, with a comparison to a uniform prior distribution. Due to the complexity of the reference priors for both the CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution than a uniform prior, which provides an alternative for prior specification for areal data in spatial statistics.

  2. Extraction of microseismic waveforms characteristics prior to rock burst using Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Li, Xuelong; Li, Zhonghui; Wang, Enyuan; Feng, Junjun; Chen, Liang; Li, Nan; Kong, Xiangguo

    2016-09-01

    This study provides a new research approach to rock burst prediction. The characteristics of microseismic (MS) waveforms prior to and during a rock burst were studied through the Hilbert-Huang transform (HHT). To demonstrate the advantage of MS feature extraction based on the HHT, the conventional analysis method (Fourier transform) was also used for comparison. The results show that the HHT is simple and reliable and can extract in-depth information about the characteristics of MS waveforms. About 10 days prior to the rock burst, the dominant frequency of the MS waveforms shifts from high to low frequency. Moreover, the waveform energy also exhibits an accumulation characteristic. Based on our results, it can be concluded that MS signal analysis through the HHT can provide valuable information about coal and rock deformation and fracture.
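
    A full HHT couples empirical mode decomposition (EMD) with Hilbert spectral analysis. The minimal sketch below shows only the second step, extracting instantaneous amplitude and frequency from a synthetic chirp standing in for an MS waveform; the signal parameters are made up, and the EMD stage is omitted:

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0                                      # sampling rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * (50.0 + 30.0 * t) * t)    # synthetic chirp

        analytic = hilbert(x)                            # analytic signal x + i*H[x]
        amplitude = np.abs(analytic)                     # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
        print(inst_freq[:5])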

  3. 6 CFR 5.46 - Procedure when response to demand is required prior to receiving instructions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 6, Domestic Security. Procedure when response to demand is required prior to receiving instructions. Section 5.46, DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY, DISCLOSURE OF RECORDS AND INFORMATION, Disclosure of Information in Litigation § 5...

  4. Receptive Field Inference with Localized Priors

    PubMed Central

    Park, Mijung; Pillow, Jonathan W.

    2011-01-01

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110
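
    ALD learns the locality hyperparameters by empirical Bayes; as a minimal sketch of only the inner step, the MAP receptive-field estimate under a fixed Gaussian prior is a ridge-like regression with the prior covariance plugged in. The localized covariance below is a hypothetical choice for illustration, not the ALD parameterization:

        import numpy as np

        def map_receptive_field(X, y, prior_cov, noise_var=1.0):
            # MAP estimate for y ~ N(X w, noise_var * I), w ~ N(0, prior_cov)
            A = X.T @ X / noise_var + np.linalg.inv(prior_cov)
            return np.linalg.solve(A, X.T @ y / noise_var)

        rng = np.random.default_rng(0)
        n_pix = 50
        idx = np.arange(n_pix)
        # hypothetical localized prior: variance concentrated near pixel 25
        C = np.diag(np.exp(-0.5 * ((idx - 25) / 5.0) ** 2) + 1e-6)

        X = rng.normal(size=(200, n_pix))
        w_true = np.exp(-0.5 * ((idx - 25) / 3.0) ** 2)   # localized filter
        y = X @ w_true + rng.normal(scale=0.5, size=200)
        w_hat = map_receptive_field(X, y, C, noise_var=0.25)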

  5. Shape priors for segmentation of the cervix region within uterine cervix images

    NASA Astrophysics Data System (ADS)

    Lotenberg, Shelly; Gordon, Shiri; Greenspan, Hayit

    2008-03-01

    The work focuses on a unique medical repository of digital Uterine Cervix images ("Cervigrams") collected by the National Cancer Institute (NCI), National Institute of Health, in longitudinal multi-year studies. NCI together with the National Library of Medicine is developing a unique web-based database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Tools are needed for the automated analysis of the cervigram content to support the cancer research. In recent works, a multi-stage automated system for segmenting and labeling regions of medical and anatomical interest within the cervigrams was developed. The current paper concentrates on incorporating prior-shape information in the cervix region segmentation task. In accordance with the fact that human experts mark the cervix region as circular or elliptical, two shape models (and corresponding methods) are suggested. The shape models are embedded within an active contour framework that relies on image features. Experiments indicate that incorporation of the prior shape information augments previous results.

  6. Weak interactions, omnivory and emergent food-web properties.

    PubMed

    Emmerson, Mark; Yearsley, Jon M

    2004-02-22

    Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.
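
    The local-stability criterion invoked in this literature is that every eigenvalue of the community (Jacobian) matrix at equilibrium has a negative real part. The sketch below checks this for a toy four-species matrix whose interaction strengths are skewed toward weak values; it is illustrative only, not the paper's simulation protocol:

        import numpy as np

        rng = np.random.default_rng(0)

        def locally_stable(J):
            # stable equilibrium: all eigenvalues have negative real parts
            return bool(np.all(np.linalg.eigvals(J).real < 0))

        # toy community matrix: self-regulation on the diagonal, off-diagonal
        # strengths drawn from an exponential (skewed toward weak interactions)
        n = 4
        J = rng.exponential(0.1, size=(n, n)) * rng.choice([-1.0, 1.0], size=(n, n))
        np.fill_diagonal(J, -1.0)
        print(locally_stable(J))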

  7. Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines

    PubMed Central

    Mikut, Ralf

    2017-01-01

    Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout.

  8. Spin Seebeck effect in a weak ferromagnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arboleda, Juan David, E-mail: juan.arboledaj@udea.edu.co; Arnache Olmos, Oscar; Aguirre, Myriam Haydee

    2016-06-06

    We report the observation of the room-temperature spin Seebeck effect (SSE) in the weakly ferromagnetic normal spinel zinc ferrite (ZFO). Despite the weak ferromagnetic behavior, measurements of the SSE in ZFO show a thermoelectric voltage response comparable with the values reported for other ferromagnetic materials. Our results suggest that the SSE might originate from the surface magnetization of the ZFO.

  9. Prior Acute Mental Exertion in Exercise and Sport

    PubMed Central

    Silva-Júnior, Fernando Lopes e; Emanuel, Patrick; Sousa, Jordan; Silva, Matheus; Teixeira, Silmar; Pires, Flávio Oliveira; Machado, Sérgio; Arias-Carrion, Oscar

    2016-01-01

    Introduction: Mental exertion is a psychophysiological state caused by sustained and prolonged cognitive activity. Understanding the possible effects of acute mental exertion on physical performance, and the associated physiological and psychological responses, is of great importance for different occupations, such as military personnel, construction workers, and athletes (professional or recreational), or simply for those practicing regular exercise, since these occupations often combine physical and mental tasks. However, the effects of performing a cognitive task on responses to aerobic exercise and sports are poorly understood. Our narrative review aims to summarize current research on the effects of prior acute mental fatigue on physical performance and the physiological and psychological responses associated with exercise and sports. Methods: The literature search was conducted using the databases PubMed, ISI Web of Knowledge and PsycInfo using the following terms and their combinations: "mental exertion", "mental fatigue", "mental fatigue and performance", "mental exertion and sports", "mental exertion and exercise". Results: We conclude that prior acute mental exertion affects the physiological and psychophysiological responses during the cognitive task, as well as subsequent exercise performance. Conclusion: Additional studies involving prior acute mental exertion, exercise/sports and physical performance still need to be carried out to analyze the physiological, psychophysiological and neurophysiological responses following acute mental exertion, and to identify the associated cardiovascular, psychological, and neuropsychological factors. PMID:27867415

  10. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase-measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum-light phase weak measurements and outperform the standard homodyne phase detection technique.
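
    For context, the standard Aharonov-Albert-Vaidman weak value that underlies WVA, written in LaTeX for an observable A with preselected state |ψ_i⟩ and postselected state |ψ_f⟩, is

        A_w \;=\; \frac{\langle \psi_f \,|\, \hat{A} \,|\, \psi_i \rangle}{\langle \psi_f \,|\, \psi_i \rangle}

    which can lie far outside the eigenvalue spectrum when the pre- and post-selected states are nearly orthogonal; this anomalously large weak value is the amplification such schemes exploit.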

  11. Weak measurements beyond the Aharonov-Albert-Vaidman formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Shengjun; Li Yang

    2011-05-15

    We extend the idea of weak measurements to the general case, provide a complete treatment, and obtain results for both the regime when the preselected and postselected states (PPS) are almost orthogonal and the regime when they are exactly orthogonal. We surprisingly find that for a fixed interaction strength, there may exist a maximum signal amplification and a corresponding optimum overlap of PPS to achieve it. For weak measurements in the orthogonal regime, we find interesting quantities that play the same role that weak values play in the nonorthogonal regime.

  12. Biological effects due to weak magnetic field on plants

    NASA Astrophysics Data System (ADS)

    Belyavskaya, N. A.

    2004-01-01

    Throughout the evolution process, Earth's magnetic field (MF, about 50 μT) was a natural component of the environment for living organisms. Biological objects flying on planned long-term interplanetary missions would experience much weaker magnetic fields, since the galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on the functioning of biological organisms are still insufficiently understood and are actively studied. Numerous experiments with seedlings of different plant species placed in a weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with controls. The proliferative activity and cell reproduction in the meristem of plant roots are reduced in a weak magnetic field. The cell reproductive cycle slows down due to the expansion of the G1 phase in many plant species (and of the G2 phase in flax and lentil roots), while the other phases of the cell cycle remain relatively stable. In plant cells exposed to a weak magnetic field, the functional activity of the genome at the early pre-replicate period is shown to decrease. A weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At the ultrastructural level, changes in the distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids were observed in meristem cells of pea roots exposed to a weak magnetic field. Mitochondria were found to be very sensitive to a weak magnetic field: their size and relative volume in cells increase, the matrix becomes electron-transparent, and the cristae are reduced. Cytochemical studies indicate that cells of plant roots exposed to a weak magnetic field show Ca2+ over-saturation in all organelles and in the cytoplasm, unlike the controls. The data presented suggest that prolonged exposures of plants to weak

  13. On Weak-BCC-Algebras

    PubMed Central

    Thomys, Janus; Zhang, Xiaohong

    2013-01-01

    We describe weak-BCC-algebras (also called BZ-algebras) in which the condition (x∗y)∗z = (x∗z)∗y is satisfied only in the case when elements x, y belong to the same branch. We also characterize ideals, nilradicals, and nilpotent elements of such algebras. PMID:24311983

  14. Axion monodromy and the weak gravity conjecture

    NASA Astrophysics Data System (ADS)

    Hebecker, Arthur; Rompineve, Fabrizio; Westphal, Alexander

    2016-04-01

    Axions with broken discrete shift symmetry (axion monodromy) have recently played a central role both in the discussion of inflation and the `relaxion' approach to the hierarchy problem. We suggest a very minimalist way to constrain such models by the weak gravity conjecture for domain walls: while the electric side of the conjecture is always satisfied if the cosine oscillations of the axion potential are sufficiently small, the magnetic side imposes a cutoff, Λ³ ∼ m f M_Pl, independent of the height of these `wiggles'. We compare our approach with the recent related proposal by Ibanez, Montero, Uranga and Valenzuela. We also discuss the non-trivial question of which version, if any, of the weak gravity conjecture for domain walls should hold. In particular, we show that string compactifications with branes of different dimensions wrapped on different cycles lead to a `geometric weak gravity conjecture' relating volumes of cycles, norms of corresponding forms and the volume of the compact space. Imposing this `geometric conjecture', e.g. on the basis of the more widely accepted weak gravity conjecture for particles, provides at least some support for the (electric and magnetic) conjecture for domain walls.

  15. Investigating the Effects of the Interaction Intensity in a Weak Measurement.

    PubMed

    Piacentini, Fabrizio; Avella, Alessio; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco

    2018-05-03

    Measurements are crucial in quantum mechanics, for fundamental research as well as for applied fields like quantum metrology, quantum-enhanced measurements and other quantum technologies. In recent years, weak-interaction-based protocols like weak measurements and protective measurements have been experimentally realized, showing peculiar features that lead to surprising advantages in several different applications. In this work we analyze the validity range of such measurement protocols, that is, how the interaction strength affects the weak value extraction, by measuring different polarization weak values on heralded single photons. We show that, even in the weak interaction regime, the coupling intensity limits the range of achievable weak values, setting a threshold on the signal amplification effect exploited in many weak-measurement-based experiments.

  16. Ultra-weak sector, Higgs boson mass, and the dilaton

    DOE PAGES

    Allison, Kyle; Hill, Christopher T.; Ross, Graham G.

    2014-09-26

    The Higgs boson mass may arise from a portal coupling to a singlet field σ which has a very large VEV f ≫ m_Higgs. This requires a sector of "ultra-weak" couplings ζ_i, where ζ_i ≲ m_Higgs²/f². Ultra-weak couplings are technically naturally small due to a custodial shift symmetry of σ in the ζ_i → 0 limit. The singlet field σ has properties similar to a pseudo-dilaton. We engineer explicit breaking of scale invariance in the ultra-weak sector via a Coleman-Weinberg potential, which requires hierarchies amongst the ultra-weak couplings.

  17. In Vivo Predictive Dissolution: Comparing the Effect of Bicarbonate and Phosphate Buffer on the Dissolution of Weak Acids and Weak Bases.

    PubMed

    Krieg, Brian J; Taghavi, Seyed Mohammad; Amidon, Gordon L; Amidon, Gregory E

    2015-09-01

    Bicarbonate is the main buffer in the small intestine and it is well known that buffer properties such as pKa can affect the dissolution rate of ionizable drugs. However, bicarbonate buffer is complicated to work with experimentally. Finding a suitable substitute for bicarbonate buffer may provide a way to perform more physiologically relevant dissolution tests. The dissolution of weak acid and weak base drugs was conducted in bicarbonate and phosphate buffer using rotating disk dissolution methodology. Experimental results were compared with the predicted results using the film model approach of (Mooney K, Mintun M, Himmelstein K, Stella V. 1981. J Pharm Sci 70(1):22-32) based on equilibrium assumptions, as well as a model accounting for the slow hydration reaction CO2 + H2O → H2CO3. Assuming the dehydration reaction H2CO3 → CO2 + H2O is irreversible, the transport analysis can accurately predict rotating disk dissolution of weak acid and weak base drugs in bicarbonate buffer. The predictions show that matching the dissolution of weak acid and weak base drugs in phosphate and bicarbonate buffer is possible. The phosphate buffer concentration necessary to match physiologically relevant bicarbonate buffer [e.g., 10.5 mM HCO3(-), pH = 6.5] is typically in the range of 1-25 mM and is very dependent upon drug solubility and pKa. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
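
    For reference, the chemistry being modeled is the slow CO2 hydration step feeding the usual acid-base equilibrium; in LaTeX,

        \mathrm{CO_2} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{H_2CO_3} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{HCO_3^-},
        \qquad
        \mathrm{pH} \;=\; \mathrm{p}K_{a,\mathrm{eff}} + \log_{10}\!\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]}

    where the commonly quoted effective pKa of the lumped CO2/H2CO3 species is roughly 6.1-6.35 depending on temperature and ionic strength; that value is general chemistry background, not a number taken from this paper.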

  18. Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation

    PubMed Central

    Lambon Ralph, Matthew A.; Kempkes, Marleen; Cousins, James N.; Lewis, Penelope A.

    2016-01-01

    Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we perform an examination of how sleep-related mechanisms interact with schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. SIGNIFICANCE STATEMENT Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. PMID:27030764

  19. The Importance of Prior Knowledge.

    ERIC Educational Resources Information Center

    Cleary, Linda Miller

    1989-01-01

    Recounts a college English teacher's experience of reading and rereading Noam Chomsky, building up a greater store of prior knowledge. Argues that Frank Smith provides a theory for the importance of prior knowledge and Chomsky's work provided a personal example with which to interpret and integrate that theory. (RS)

  20. Weak Defect Identification for Centrifugal Compressor Blade Crack Based on Pressure Sensors and Genetic Algorithm.

    PubMed

    Li, Hongkun; He, Changbo; Malekian, Reza; Li, Zhixiong

    2018-04-19

    The centrifugal compressor is a piece of key equipment for petrochemical factories. As the core component of a compressor, the blades suffer periodic vibration and a flow-induced excitation mechanism, which can lead to crack defects. Moreover, a blade defect usually has a serious impact on the normal operation of compressors and the safety of operators. Therefore, an effective blade crack identification method is particularly important for the reliable operation of compressors. Conventional non-destructive testing and evaluation (NDT&E) methods can detect blade defects effectively; however, the compressor must shut down during the testing process, which is time-consuming and costly. In addition, these methods are not suitable for long-term on-line condition monitoring and cannot identify blade defects in time. Therefore, an effective on-line condition monitoring and weak defect identification method needs to be developed. Because blade vibration information is difficult to measure directly, in this paper pressure sensors mounted on the casing are used to sample the airflow pressure pulsation signal on-line near the rotating impeller, in order to monitor the blade condition indirectly. A key problem is that the abnormal blade vibration amplitude induced by a crack is always small, so this feature information is much weaker still in the pressure signal. It is therefore usually difficult to identify the blade defect characteristic frequency embedded in the pressure pulsation signal with general signal processing methods, owing to the weakness of the feature information and the interference of strong noise. In this paper, continuous wavelet transform (CWT) is used to pre-process the sampled signal first. Then, the method of bistable stochastic resonance (SR) based on Woods-Saxon and Gaussian (WSG) potential is applied to enhance the weak characteristic frequency contained in the pressure

  1. Weak Defect Identification for Centrifugal Compressor Blade Crack Based on Pressure Sensors and Genetic Algorithm

    PubMed Central

    Li, Hongkun; He, Changbo

    2018-01-01

    The centrifugal compressor is a piece of key equipment for petrochemical factories. As the core component of a compressor, the blades suffer periodic vibration and a flow-induced excitation mechanism, which can lead to crack defects. Moreover, a blade defect usually has a serious impact on the normal operation of compressors and the safety of operators. Therefore, an effective blade crack identification method is particularly important for the reliable operation of compressors. Conventional non-destructive testing and evaluation (NDT&E) methods can detect blade defects effectively; however, the compressor must shut down during the testing process, which is time-consuming and costly. In addition, these methods are not suitable for long-term on-line condition monitoring and cannot identify blade defects in time. Therefore, an effective on-line condition monitoring and weak defect identification method needs to be developed. Because blade vibration information is difficult to measure directly, in this paper pressure sensors mounted on the casing are used to sample the airflow pressure pulsation signal on-line near the rotating impeller, in order to monitor the blade condition indirectly. A key problem is that the abnormal blade vibration amplitude induced by a crack is always small, so this feature information is much weaker still in the pressure signal. It is therefore usually difficult to identify the blade defect characteristic frequency embedded in the pressure pulsation signal with general signal processing methods, owing to the weakness of the feature information and the interference of strong noise. In this paper, continuous wavelet transform (CWT) is used to pre-process the sampled signal first. Then, the method of bistable stochastic resonance (SR) based on Woods-Saxon and Gaussian (WSG) potential is applied to enhance the weak characteristic frequency contained in the pressure

  2. Light weakly interacting massive particles

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.

    2017-08-01

    Light weakly interacting massive particles (WIMPs) are dark matter particle candidates with weak scale interactions with the known particles, and mass in the GeV to tens of GeV range. Hints of light WIMPs have appeared in several dark matter searches in the last decade. The unprecedented possible coincidence of four separate direct detection experimental hints and a potential indirect detection signal in gamma rays from the galactic center, falling in tantalizingly close regions of mass and cross section, aroused considerable interest in our field. Although these hints have so far not resulted in a discovery, they have had a significant impact on our field. Here we review the evidence for and against light WIMPs as dark matter candidates and discuss future relevant experiments and observations.

  3. Gravity as a Strong Prior: Implications for Perception and Action.

    PubMed

    Jörges, Björn; López-Moliner, Joan

    2017-01-01

    In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality), or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks, such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high and it overrules any sensory information to its contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
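
    A minimal sketch of why a "strong prior" overrules discrepant sensory input in the Bayesian picture: with a Gaussian prior and a Gaussian likelihood, the posterior mean is a precision-weighted average, so a tiny prior variance pins the estimate near the prior. All numbers below are illustrative:

        def fuse_gaussian(prior_mean, prior_var, obs_mean, obs_var):
            # posterior of a Gaussian prior combined with a Gaussian observation
            w_prior, w_obs = 1.0 / prior_var, 1.0 / obs_var
            post_var = 1.0 / (w_prior + w_obs)
            post_mean = post_var * (w_prior * prior_mean + w_obs * obs_mean)
            return post_mean, post_var

        # strong earth-gravity prior (9.81 m/s^2, tiny variance) versus noisy,
        # discrepant visual evidence: the posterior barely moves (about 9.80)
        print(fuse_gaussian(9.81, 0.01, 5.0, 4.0))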

  4. Weak percolation on multiplex networks

    NASA Astrophysics Data System (ADS)

    Baxter, Gareth J.; Dorogovtsev, Sergey N.; Mendes, José F. F.; Cellai, Davide

    2014-04-01

    Bootstrap percolation is a simple but nontrivial model. It has applications in many areas of science and has been explored on random networks for several decades. In single-layer (simplex) networks, it has been recently observed that bootstrap percolation, which is defined as an incremental process, can be seen as the opposite of pruning percolation, where nodes are removed according to a connectivity rule. Here we propose models of both bootstrap and pruning percolation for multiplex networks. We collectively refer to these two models with the concept of "weak" percolation, to distinguish them from the somewhat classical concept of ordinary ("strong") percolation. While the two models coincide in simplex networks, we show that they decouple when considering multiplexes, giving rise to a wealth of critical phenomena. Our bootstrap model constitutes the simplest example of a contagion process on a multiplex network and has potential applications in critical infrastructure recovery and information security. Moreover, we show that our pruning percolation model may provide a way to diagnose missing layers in a multiplex network. Finally, our analytical approach allows us to calculate critical behavior and characterize critical clusters.

  5. Weak lensing in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Troxel, Michael

    2016-03-01

    I will present the current status of weak lensing results from the Dark Energy Survey (DES). DES will survey 5000 square degrees in five photometric bands (grizY), and has already provided a competitive weak lensing catalog from Science Verification data covering just 3% of the final survey footprint. I will summarize the status of shear catalog production using observations from the first year of the survey and discuss recent weak lensing science results from DES. Finally, I will report on the outlook for future cosmological analyses in DES including the two-point cosmic shear correlation function and discuss challenges that DES and future surveys will face in achieving a control of systematics that allows us to take full advantage of the available statistical power of our shear catalogs.

  6. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
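
    A toy sketch of the idea of learning from class proportions alone: each bag of samples carries only its signal fraction, and a logistic model is fit by matching predicted to known proportions. The data, loss, and parameters below are illustrative assumptions, not the paper's setup:

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def make_bag(n, frac_signal):
            # mixture of "signal" (mean +1) and "background" (mean -1) points
            n_sig = int(n * frac_signal)
            sig = rng.normal(+1.0, 1.0, size=(n_sig, 2))
            bkg = rng.normal(-1.0, 1.0, size=(n - n_sig, 2))
            return np.vstack([sig, bkg])

        bags = [(make_bag(300, 0.8), 0.8), (make_bag(300, 0.2), 0.2)]

        w, b, lr = np.zeros(2), 0.0, 0.5
        for _ in range(2000):
            for X, p in bags:
                pred = sigmoid(X @ w + b)
                err = pred.mean() - p            # squared proportion-matching loss
                s = pred * (1.0 - pred)          # d sigmoid / d logit
                w -= lr * 2.0 * err * (s @ X) / len(X)
                b -= lr * 2.0 * err * s.mean()

        print(w, b)   # the learned direction separates signal from background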

  7. Weakly supervised classification in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  8. Weak Interactions

    DOE R&D Accomplishments Database

    Lee, T. D.

    1957-06-01

    Experimental results on the non-conservation of parity and charge conjugation in weak interactions are reviewed. The two-component theory of the neutrino is discussed. Lepton reactions are examined under the assumption of the law of conservation of leptons and that the neutrino is described by a two-component theory. From the results of this examination, the universal Fermi interactions are analyzed. Although reactions involving the neutrino can be described, the same is not true of reactions which do not involve the lepton, as the discussion of the decay of K mesons and hyperons shows. The question of the invariance of time reversal is next examined. (J.S.R.)

  9. Counterfactual statements and weak measurements: an experimental proposal

    NASA Astrophysics Data System (ADS)

    Mølmer, Klaus

    2001-12-01

    A recent analysis suggests that weak measurements can be used to give observational meaning to counterfactual reasoning in quantum physics. A weak measurement is predicted to assign a negative unit population to a specific state in an interferometric Gedankenexperiment proposed by Hardy. We propose an experimental implementation with trapped ions of the Gedankenexperiment and of the weak measurement. In our standard quantum mechanical analysis of the proposal no states have negative population, but we identify the registration of a negative population by particles being displaced on average in the direction opposite to a force acting upon them.

  10. Tight Bell Inequalities and Nonlocality in Weak Measurement

    NASA Astrophysics Data System (ADS)

    Waegell, Mordecai

    A general class of Bell inequalities is derived based on strict adherence to probabilistic entanglement correlations observed in nature. This derivation gives significantly tighter bounds on local hidden variable theories for the well-known Clauser-Horne-Shimony-Holt (CHSH) inequality, and also leads to new proofs of the Greenberger-Horne-Zeilinger (GHZ) theorem. This method is applied to weak measurements and reveals nonlocal correlations between the weak value and the post-selection, which rules out various classical models of weak measurement. Implications of these results are discussed. Fetzer-Franklin Fund of the John E. Fetzer Memorial Trust.
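
    For context, the CHSH combination the abstract refers to is, in LaTeX,

        S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'),
        \qquad |S| \le 2 \;\;\text{(local hidden variables)},
        \qquad |S| \le 2\sqrt{2} \;\;\text{(quantum bound)}

    where E(.,.) are correlation functions at detector settings a, a' and b, b'; the abstract's claim is that conditioning on the entanglement correlations actually observed in nature tightens the classical bound below the usual value of 2.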

  11. Generic weak isolated horizons

    NASA Astrophysics Data System (ADS)

    Chatterjee, Ayan; Ghosh, Amit

    2006-12-01

    Weak isolated horizon boundary conditions have been relaxed to what is arguably their weakest form, such that both the zeroth and first laws of black hole mechanics hold. This makes the formulation more amenable to applications in both analytic and numerical relativity. It also unifies the phase spaces of non-extremal and extremal black holes.

  12. Quantum Information Theory of Measurement

    NASA Astrophysics Data System (ADS)

    Glick, Jennifer Ranae

    Quantum measurement lies at the heart of quantum information processing and is one of the criteria for quantum computation. Despite its central role, there remains a need for a robust quantum information-theoretical description of measurement. In this work, I will quantify how information is processed in a quantum measurement by framing it in quantum information-theoretic terms. I will consider a diverse set of measurement scenarios, including weak and strong measurements, and parallel and consecutive measurements. In each case, I will perform a comprehensive analysis of the role of entanglement and entropy in the measurement process and track the flow of information through all subsystems. In particular, I will discuss how weak and strong measurements are fundamentally of the same nature and show that weak values can be computed exactly for certain measurements with an arbitrary interaction strength. In the context of the Bell-state quantum eraser, I will derive a trade-off between the coherence and "which-path" information of an entangled pair of photons and show that a quantum information-theoretic approach yields additional insights into the origins of complementarity. I will consider two types of quantum measurements: those that are made within a closed system where every part of the measurement device, the ancilla, remains under control (what I will call unamplified measurements), and those performed within an open system where some degrees of freedom are traced over (amplified measurements). For sequences of measurements of the same quantum system, I will show that information about the quantum state is encoded in the measurement chain and that some of this information is "lost" when the measurements are amplified: the ancillae become equivalent to a quantum Markov chain. Finally, using the coherent structure of unamplified measurements, I will outline a protocol for generating remote entanglement, an essential resource for quantum teleportation and quantum

  13. Weak values of spin and momentum in atomic systems.

    NASA Astrophysics Data System (ADS)

    Flack, Robert; Hiley, Basil; Barker, Peter; Monachello, Vincenzo; Morley, Joel

    2017-04-01

    Weak values have a long history and were first considered by Landau and London in connection with superfluids. Hirschfelder called them sub-observables, and Dirac anticipated them when discussing non-commutative geometry in quantum mechanics. The idea of a weak value has returned to prominence due to Aharonov, Albert and Vaidman showing how they can be measured. They are not eigenvalues of the system and cannot be measured by a collapse of the wave function with the traditional von Neumann (strong) measurement, which is a single-stage process. In contrast, the weak measurement process has three stages: preselection, a weak stage and finally a post-selection. Although weak values have been observed using photons and neutrons, we are building two experiments to observe weak values of spin and momentum in atomic systems. For spin we are following the method outlined by Duck et al., which is a variant of the original Stern-Gerlach experiment using the metastable 2³S₁ form of helium. For momentum we are using a method similar to that used by Kocsis, with excited argon atoms in the ³P₂ state passing through a 2-slit interferometer. The design, simulation and re… John Fetzer Memorial Trust.

  14. Sufficient conditions for uniqueness of the weak value

    NASA Astrophysics Data System (ADS)

    Dressel, J.; Jordan, A. N.

    2012-01-01

    We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. 104 240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem.

  15. Co-Labeling for Multi-View Weakly Labeled Learning.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W

    2016-06-01

    It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi

  16. Low dose tomographic fluoroscopy: 4D intervention guidance with running prior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Barbara; Kuntz, Jan; Brehm, Marcus

    Purpose: Today's standard imaging technique in interventional radiology is single- or biplane x-ray fluoroscopy, which delivers 2D projection images as a function of time (2D+T). This state-of-the-art technology, however, suffers from its projective nature and is limited by the superposition of the patient's anatomy. Temporally resolved tomographic volumes (3D+T) would significantly improve the visualization of complex structures. A continuous tomographic data acquisition, if carried out with today's technology, would yield an excessive patient dose. Recently the authors proposed a method that enables tomographic fluoroscopy at the same dose level as projective fluoroscopy, which means that if the scanning time of an intervention guided by projective fluoroscopy is the same as that of an intervention guided by tomographic fluoroscopy, almost the same dose is administered to the patient. The purpose of this work is to extend the authors' previous work and allow for patient motion during the intervention. Methods: The authors propose the running prior technique for adaptation of a prior image. This adaptation is realized by a combination of registration and projection replacement. In a first step the prior is deformed to the current position via affine and deformable registration. Then the information from outdated projections is replaced by newly acquired projections using forward and backprojection steps. The thus adapted volume is the running prior. The proposed method is validated by simulated as well as measured data. To investigate motion during intervention a moving head phantom was simulated. Real in vivo data of a pig are acquired by a prototype CT system consisting of a flat detector and a continuously rotating clinical gantry. Results: With the running prior technique it is possible to correct for motion without additional dose. For an application in intervention guidance both steps of the running prior technique, registration and replacement, are

  17. Weak Perturbations of Biochemical Oscillators

    NASA Astrophysics Data System (ADS)

    Gailey, Paul

    2001-03-01

    Biochemical oscillators may play important roles in gene regulation, circadian rhythms, physiological signaling, and sensory processes. These oscillations typically occur inside cells where the small numbers of reacting molecules result in fluctuations in the oscillation period. Some oscillation mechanisms have been reported that resist fluctuations and produce more stable oscillations. In this paper, we consider the use of biochemical oscillators as sensors by comparing inherent fluctuations with the effects of weak perturbations to one of the reactants. Such systems could be used to produce graded responses to weak stimuli. For example, a leading hypothesis to explain geomagnetic navigation in migrating birds and other animals is based on magnetochemical reactions. Because the magnitude of magnetochemical effects is small at geomagnetic field strengths, a sensitive, noise resistant detection scheme would be required.

  18. Weak photoacoustic signal detection based on the differential duffing oscillator

    NASA Astrophysics Data System (ADS)

    Li, Chenjing; Xu, Xuemei; Ding, Yipeng; Yin, Linzi; Dou, Beibei

    2018-04-01

    Based on photoacoustic spectroscopy theory, the relationship between a weak photoacoustic signal and gas concentration is described. Studies of the Duffing oscillator's ability to identify state transitions, and of how its threshold value is determined, have proven the feasibility of applying the Duffing oscillator to weak signal detection. An improved differential Duffing oscillator is proposed to identify weak signals of any frequency and to improve the signal-to-noise ratio. The analytical methods and numerical experiments for the novel model are introduced in detail to confirm its superiority. A weak photoacoustic signal detection system based on the differential Duffing oscillator is then constructed; to our knowledge, this is the first successful application of a weak signal detection method based on a differential Duffing oscillator in photoacoustic spectroscopy gas monitoring.
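
    A minimal sketch of the kind of Duffing detector described, integrating a standard driven Duffing oscillator with SciPy; the parameter values and the way the weak input enters are illustrative assumptions, not the paper's differential design:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Holmes-type Duffing oscillator:
        #   x'' + delta*x' - x + x^3 = (gamma + a_weak) * cos(omega * t)
        # Near the threshold drive gamma_c, a tiny a_weak can flip the system
        # between chaotic and large-scale periodic motion, signaling detection.
        delta, gamma, omega = 0.5, 0.825, 1.0   # illustrative values near threshold

        def rhs(t, state, a_weak):
            x, v = state
            return [v, -delta * v + x - x**3 + (gamma + a_weak) * np.cos(omega * t)]

        sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0], args=(0.01,), max_step=0.05)
        print(sol.y[0, -5:])   # inspect the late-time trajectory for a regime change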

  19. The relation between prior knowledge and students' collaborative discovery learning processes

    NASA Astrophysics Data System (ADS)

    Gijlers, Hannie; de Jong, Ton

    2005-03-01

    In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication between students was recorded and the interaction with the environment was logged. Based on students' individual judgments of the truth-value and testability of a series of domain-specific propositions, a detailed description of the knowledge configuration for each dyad was created before they entered the learning environment. Qualitative analyses of two dialogues illustrated that prior knowledge influences the discovery learning processes, and knowledge development in a pair of students. Assessments of student and dyad definitional (domain-specific) knowledge, generic (mathematical and graph) knowledge, and generic (discovery) skills were related to the students' dialogue in different discovery learning processes. Results show that a high level of definitional prior knowledge is positively related to the proportion of communication regarding the interpretation of results. Heterogeneity with respect to generic prior knowledge was positively related to the number of utterances made in the discovery process categories hypotheses generation and experimentation. Results of the qualitative analyses indicated that collaboration between extremely heterogeneous dyads is difficult when the high achiever is not willing to scaffold information and work in the low achiever's zone of proximal development.

  20. Proponent-Indigenous agreements and the implementation of the right to free, prior, and informed consent in Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papillon, Martin, E-mail: martin.papillon@umontreal.ca; Rodon, Thierry, E-mail: thierry.rodon@pol.ulaval.ca

    Indigenous peoples have gained considerable agency in shaping decisions regarding resource development on their traditional lands. This growing agency is reflected in the emergence of the right to free, prior, and informed consent (FPIC) when Indigenous rights may be adversely affected by major resource development projects. While many governments remain non-committal toward FPIC, corporate actors are more proactive at engaging with Indigenous peoples in seeking their consent to resource extraction projects through negotiated Impact and Benefit Agreements. Focusing on the Canadian context, this article discusses the roots and implications of a proponent-driven model for seeking Indigenous consent to natural resource extraction on their traditional lands. Building on two case studies, the paper argues that negotiated consent through IBAs offers a truncated version of FPIC from the perspective of the communities involved. The deliberative ethic at the core of FPIC is often undermined in the negotiation process associated with proponent-led IBAs. - Highlights: • FPIC is becoming a norm for resource extraction projects on Indigenous lands. • Proponent-led IBAs have become the main instrument to establish FPIC in Canada. • Case studies show elite-driven IBA negotiations do not always create the conditions for FPIC. • We need to pay attention to community deliberations as an inherent aspect of FPIC.

  1. Whole-body vibration to prevent intensive care unit-acquired weakness: safety, feasibility, and metabolic response.

    PubMed

    Wollersheim, Tobias; Haas, Kurt; Wolf, Stefan; Mai, Knut; Spies, Claudia; Steinhagen-Thiessen, Elisabeth; Wernecke, Klaus-D; Spranger, Joachim; Weber-Carstens, Steffen

    2017-01-09

    Intensive care unit (ICU)-acquired weakness in critically ill patients is a common and significant complication affecting the course of critical illness. Whole-body vibration is known to be an effective form of muscle training and may be an option for diminishing weakness and muscle wasting. Patients who are immobilized and unavailable for active physiotherapy may especially benefit. Until now, whole-body vibration had not been investigated in mechanically ventilated ICU patients. We investigated the safety, feasibility, and metabolic response of whole-body vibration in 19 mechanically ventilated, immobilized ICU patients. Passive range of motion was performed prior to whole-body vibration therapy, which was administered in the supine position for 15 minutes. Continuous monitoring of vital signs, hemodynamics, and energy metabolism, as well as intermittent blood sampling, took place from the start of baseline measurements up to 1 hour post intervention. We performed comparative longitudinal analysis of the phases before, during, and after intervention. Vital signs and hemodynamic parameters remained stable, with only minor changes resulting from the intervention. No application had to be interrupted, and we did not observe any adverse events. Whole-body vibration did not significantly or clinically change vital signs and hemodynamics, but a significant increase in energy expenditure during whole-body vibration was observed. In our study the application of whole-body vibration was safe and feasible, and the technique leads to increased energy expenditure. This may offer the chance to treat patients in the ICU with whole-body vibration. Further investigations should focus on the efficacy of whole-body vibration in the prevention of ICU-acquired weakness. Applicability and Safety of Vibration Therapy in Intensive Care Unit (ICU) Patients. ClinicalTrials.gov NCT01286610. Registered 28 January 2011.

  2. pH-Dependent Liquid-Liquid Phase Separation of Highly Supersaturated Solutions of Weakly Basic Drugs.

    PubMed

    Indulkar, Anura S; Box, Karl J; Taylor, Robert; Ruiz, Rebeca; Taylor, Lynne S

    2015-07-06

    Supersaturated solutions of poorly water-soluble drugs can form both in vivo and in vitro. For example, increases in pH during gastrointestinal transit can decrease the aqueous solubility of weakly basic drugs, resulting in supersaturation, in particular when exiting the acidic stomach environment. Recently, it has been observed that highly supersaturated solutions of drugs with low aqueous solubility can undergo liquid-liquid phase separation (LLPS) prior to crystallization, forming a turbid solution such that the concentration of the drug in the continuous solution phase corresponds to the amorphous solubility while the colloidal phase is composed of a disordered drug-rich phase. Although it is well established that the equilibrium solubility of crystalline weakly basic drugs follows the Henderson-Hasselbalch relationship, the impact of pH on the LLPS phenomenon or the amorphous solubility has not been explored. In this work, the LLPS concentration of three weakly basic compounds (clotrimazole, nicardipine, and atazanavir) was determined as a function of pH using three different methods and was compared to the predicted amorphous solubility, which was calculated from the pH-dependent crystalline solubility and an estimate of the free energy difference between the amorphous and crystalline forms. It was observed that, similar to the crystalline solubility, the experimental amorphous solubility at any pH follows the Henderson-Hasselbalch relation and can be predicted if the amorphous solubility of the free base is known. Excellent agreement between the LLPS concentration and the predicted amorphous solubility was observed. Dissolution studies of amorphous drugs showed that the solution concentration can reach the corresponding LLPS concentration at that pH. Solid-state analysis of the precipitated material confirmed the amorphous nature. This work provides insight into the pH-dependent precipitation behavior of poorly water-soluble compounds and provides a
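
    The Henderson-Hasselbalch scaling described above is simple to compute. The sketch below uses hypothetical solubility and pKa values (not measured data from this study) to show how the total solubility of a monoprotic weak base rises as pH drops below the pKa, and how the same relation applied to the amorphous free-base solubility predicts the LLPS onset concentration.

```python
# Total solubility of a monoprotic weak base via Henderson-Hasselbalch:
# S_total(pH) = S0 * (1 + 10**(pKa - pH)), where S0 is the intrinsic
# (free-base) solubility. Using the amorphous free-base solubility for S0
# gives the predicted LLPS onset concentration at each pH.
def total_solubility(s0, pka, ph):
    return s0 * (1.0 + 10.0 ** (pka - ph))

# Hypothetical illustration values only (ug/mL and pKa):
s0_crystalline, s0_amorphous, pka = 1.0, 8.0, 6.0
for ph in (3.0, 5.0, 6.0, 7.4):
    print(f"pH {ph}: crystalline {total_solubility(s0_crystalline, pka, ph):8.1f}  "
          f"amorphous/LLPS {total_solubility(s0_amorphous, pka, ph):8.1f}")
```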

  3. Bayesian network prior: network analysis of biological data using external knowledge

    PubMed Central

    Isci, Senol; Dogan, Haluk; Ozturk, Cengizhan; Otu, Hasan H.

    2014-01-01

    Motivation: Reverse engineering GI networks from experimental data is a challenging task due to the complex nature of the networks and the noise inherent in the data. One way to overcome these hurdles would be incorporating the vast amounts of external biological knowledge when building interaction networks. We propose a framework where GI networks are learned from experimental data using Bayesian networks (BNs) and the incorporation of external knowledge is also done via a BN that we call Bayesian Network Prior (BNP). BNP depicts the relation between various evidence types that contribute to the event ‘gene interaction’ and is used to calculate the probability of a candidate graph (G) in the structure learning process. Results: Our simulation results on synthetic, simulated and real biological data show that the proposed approach can identify the underlying interaction network with high accuracy even when the prior information is distorted and outperforms existing methods. Availability: Accompanying BNP software package is freely available for academic use at http://bioe.bilgi.edu.tr/BNP. Contact: hasan.otu@bilgi.edu.tr Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24215027
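
    As a toy illustration of how an external-knowledge prior enters structure learning, the sketch below scores a candidate graph G by adding a structure log-prior to the data log-likelihood. It deliberately simplifies: per-edge prior beliefs are treated as independent, whereas the actual BNP model couples multiple evidence types through a Bayesian network.

```python
# Toy structure prior P(G) from per-edge prior beliefs (assumed independent
# here, unlike the full BNP model). In structure search the overall score
# would be log P(D | G) + log P(G).
import numpy as np

def log_prior(graph_edges, edge_belief, all_pairs):
    """graph_edges: set of (u, v) pairs present in candidate graph G.
    edge_belief: dict (u, v) -> prior probability that the edge exists."""
    lp = 0.0
    for pair in all_pairs:
        p = edge_belief.get(pair, 0.5)      # uninformative default
        lp += np.log(p) if pair in graph_edges else np.log1p(-p)
    return lp

pairs = [("g1", "g2"), ("g1", "g3"), ("g2", "g3")]
belief = {("g1", "g2"): 0.9, ("g2", "g3"): 0.2}   # hypothetical evidence
print(log_prior({("g1", "g2")}, belief, pairs))   # favors the supported edge
```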

  4. Prior health expenditures and risk sharing with insurers competing on quality.

    PubMed

    Marchand, Maurice; Sato, Motohiro; Schokkaert, Erik

    2003-01-01

    Insurers can exploit the heterogeneity within risk-adjustment classes to select the good risks because they have more information than the regulator on the expected expenditures of individual insurees. To counteract this cream skimming, mixed systems combining capitation and cost-based payments have been adopted that do not, however, generally use the past expenditures of insurees as a risk adjuster. In this article, two symmetric insurers compete for clients by differentiating the quality of service offered to them according to some private information about their risk. In our setting it is always welfare improving to use prior expenditures as a risk adjuster.

  5. The Simultaneous Determination of Muscle Cell pH Using a Weak Acid and Weak Base

    PubMed Central

    Adler, Sheldon

    1972-01-01

    Should significant pH heterogeneity exist within cells, then calculation of intracellular pH from the distribution of a weak acid will give a value closest to the highest pH in the system, whereas calculation from the distribution of a weak base will give a value closer to the lowest pH; these two values should then differ significantly. Intact rat diaphragms were exposed in vitro to varying bicarbonate concentrations (pure metabolic) and CO2 tensions (pure respiratory), and steady-state cell pH was measured simultaneously both by distribution of the weak acid 5,5-dimethyloxazolidine-2,4-dione-14C (pH DMO) and by distribution of the weak base nicotine-14C (pH nicotine). The latter compound was found suitable for measuring cell pH since it was neither metabolized nor bound by rat diaphragms. At an external pH of 7.40, pH DMO was 7.17 while pH nicotine was 6.69, a difference of 0.48 pH units (P < 0.001). In either respiratory or metabolic alkalosis both pH DMO and pH nicotine rose, so that the difference between them remained essentially constant. Metabolic acidosis induced a decrease in both values, though they fell more slowly than did extracellular pH. In contradistinction, in respiratory acidosis, decreasing extracellular pH from 7.40 to 6.80 resulted in a 0.35 pH unit drop in pH DMO while pH nicotine remained constant. In every experiment, under all external conditions, pH DMO exceeded pH nicotine. These results indicate that there is significant pH heterogeneity within diaphragm muscle, but the degree of heterogeneity may vary under different external conditions. The metabolic implications of these findings are discussed. In addition, the data show that true overall cell pH is between 6.69 and 7.17, a full pH unit higher than would be expected from thermodynamic considerations alone. This implies the presence of active processes to maintain cell pH. PMID:5009113
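
    The calculation underlying both tracers is the classical distribution equation: only the neutral species is assumed to cross the membrane, so the measured intracellular-to-extracellular concentration ratio of the tracer, together with the external pH and the tracer's pKa, determines the cell pH. A sketch with illustrative numbers (not this study's raw data) follows.

```python
# Standard weak-acid (e.g., DMO) and weak-base (e.g., nicotine) distribution
# equations for cell pH. ratio = C_inside / C_outside of total tracer.
import math

def ph_from_weak_acid(ratio, ph_e, pka):
    # C = [HA] * (1 + 10**(pH - pKa)); [HA] is equal across the membrane
    return pka + math.log10(ratio * (1.0 + 10.0 ** (ph_e - pka)) - 1.0)

def ph_from_weak_base(ratio, ph_e, pka):
    # C = [B] * (1 + 10**(pKa - pH)); [B] is equal across the membrane
    return pka - math.log10(ratio * (1.0 + 10.0 ** (pka - ph_e)) - 1.0)

# Illustrative values (DMO pKa ~6.1, nicotine pKa ~8.0, external pH 7.40):
print(round(ph_from_weak_acid(ratio=0.6, ph_e=7.40, pka=6.1), 2))   # ~7.16
print(round(ph_from_weak_base(ratio=3.0, ph_e=7.40, pka=8.0), 2))   # ~6.86
```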

  6. Notes on the birth-death prior with fossil calibrations for Bayesian estimation of species divergence times.

    PubMed

    Dos Reis, Mario

    2016-07-19

    Constructing a multi-dimensional prior on the times of divergence (the node ages) of species in a phylogeny is not a trivial task, in particular, if the prior density is the result of combining different sources of information such as a speciation process with fossil calibration densities. Yang & Rannala (2006 Mol. Biol. Evol 23, 212-226. (doi:10.1093/molbev/msj024)) laid out the general approach to combine the birth-death process with arbitrary fossil-based densities to construct a prior on divergence times. They achieved this by calculating the density of node ages without calibrations conditioned on the ages of the calibrated nodes. Here, I show that the conditional density obtained by Yang & Rannala is misspecified. The misspecified density can sometimes be quite strange-looking and can lead to unintentionally informative priors on node ages without fossil calibrations. I derive the correct density and provide a few illustrative examples. Calculation of the density involves a sum over a large set of labelled histories, and so obtaining the density in a computer program seems hard at the moment. A general algorithm that may provide a way forward is given. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).

  7. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with its online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in terms of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air conditioning, and open windows.
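
    The paper derives analytic MAP estimators and adapts the prior parameters online; neither is reproduced here. As a minimal stand-in, the sketch below finds the MAP spectral magnitude numerically on a grid, combining a Gaussian observation likelihood with a generalized gamma prior (scipy's parameterization; all parameter values are assumptions).

```python
# Grid-based MAP magnitude estimate under a generalized gamma prior.
import numpy as np
from scipy.stats import gengamma, norm

def map_magnitude(observed, noise_std, a=1.5, c=0.8, scale=1.0):
    grid = np.linspace(1e-4, observed + 5 * noise_std, 4000)
    log_post = (norm.logpdf(observed, loc=grid, scale=noise_std)   # likelihood
                + gengamma.logpdf(grid, a, c, scale=scale))        # prior
    return grid[np.argmax(log_post)]

# Hypothetical single frequency bin: noisy magnitude 1.2, noise std 0.5.
print(map_magnitude(1.2, 0.5))
```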

  8. Maximum entropy, fluctuations and priors

    NASA Astrophysics Data System (ADS)

    Caticha, A.

    2001-05-01

    The method of maximum entropy (ME) is extended to address the following problem: once one accepts that the ME distribution is to be preferred over all others, the question is to what extent distributions with lower entropy are supposed to be ruled out. Two applications are given. The first is to the theory of thermodynamic fluctuations. The formulation is exact, covariant under changes of coordinates, and allows fluctuations of both the extensive and the conjugate intensive variables. The second application is to the construction of an objective prior for Bayesian inference. The prior obtained by following the ME method to its inevitable conclusion turns out to be a special case (α=1) of what are currently known as entropic priors.
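
    The core ME computation is easy to make concrete. The sketch below is an arbitrary discrete example, not from the paper: maximizing entropy subject to a fixed mean yields a Gibbs/exponential-family distribution whose Lagrange multiplier is found numerically.

```python
# Maximum entropy on a discrete support subject to a mean constraint:
# p_i ∝ exp(-lam * x_i), with lam chosen so the mean matches the target.
import numpy as np
from scipy.optimize import brentq

x = np.arange(6)                 # support {0, ..., 5}
target_mean = 1.5

def mean_of(lam):
    w = np.exp(-lam * x)
    return (x * w).sum() / w.sum()

lam = brentq(lambda l: mean_of(l) - target_mean, -10.0, 10.0)
p = np.exp(-lam * x)
p /= p.sum()
print(np.round(p, 4), round((x * p).sum(), 4))   # ME distribution, mean ~1.5
```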

  9. Isotropy of Angular Frequencies and Weak Chimeras with Broken Symmetry

    NASA Astrophysics Data System (ADS)

    Bick, Christian

    2017-04-01

    The notion of a weak chimera provides a tractable definition for chimera states in networks of finitely many phase oscillators. Here, we generalize the definition of a weak chimera to a more general class of equivariant dynamical systems by characterizing solutions in terms of the isotropy of their angular frequency vector; for coupled phase oscillators the angular frequency vector is given by the average of the vector field along a trajectory. Symmetries of solutions automatically imply angular frequency synchronization. We show that the presence of such symmetries is not necessary by giving a result for the existence of weak chimeras without instantaneous or setwise symmetries for coupled phase oscillators. Moreover, we construct a coupling function that gives rise to chaotic weak chimeras without symmetry in weakly coupled populations of phase oscillators with generalized coupling.
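
    Because the defining quantity is the time-averaged angular frequency of each oscillator, a weak chimera can be diagnosed numerically. The sketch below uses simple Euler integration; the coupling function, coupling strength, and population sizes are all assumptions for illustration, not the paper's construction. Frequency-synchronized subsets with distinct average frequencies indicate a weak chimera.

```python
# Time-averaged angular frequencies in two weakly coupled populations of
# identical phase oscillators (parameters are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, eps, dt, steps = 2, 0.05, 0.01, 100_000    # oscillators per population
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
theta0 = theta.copy()

def g(x):                                     # assumed phase coupling function
    return np.sin(x) - 0.2 * np.sin(2.0 * x)

pop = [range(0, n), range(n, 2 * n)]
for _ in range(steps):
    dtheta = np.zeros_like(theta)
    for i in range(2 * n):
        same, other = (pop[0], pop[1]) if i < n else (pop[1], pop[0])
        dtheta[i] = (sum(g(theta[j] - theta[i]) for j in same) / n
                     + eps * sum(g(theta[j] - theta[i]) for j in other) / n)
    theta = theta + dt * dtheta

omega = (theta - theta0) / (steps * dt)       # average angular frequencies
print(np.round(omega, 3))   # distinct frequency groups suggest a weak chimera
```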

  10. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving data efficiency by a factor of the number of energy channels without introducing the visible limited-angle artifacts caused by reducing projection views. Leveraging the structural coherence at different energies, we first pre-reconstruct a prior structure-information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model that we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem, and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy will cause severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides us with images free of limited-angle artifacts, with all edge details well preserved in our experimental study.
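
    The clustering step is straightforward to sketch. Below, k-means on a pre-reconstructed prior image yields a small set of representative gray levels, and a penalty measures how far a reconstruction strays from its pixel's cluster center; in CPSR this kind of constraint sits alongside the data-fidelity and sparsity terms inside the ADMM solver, which is not reproduced here. The prior image and cluster count are stand-ins.

```python
# k-means clustering of a prior image as a structure-information constraint.
import numpy as np
from sklearn.cluster import KMeans

prior_image = np.random.default_rng(3).random((64, 64))  # stand-in prior image
km = KMeans(n_clusters=4, n_init=10).fit(prior_image.reshape(-1, 1))
centers = km.cluster_centers_.ravel()
labels = km.labels_.reshape(prior_image.shape)

def clustering_penalty(image):
    """Squared distance of each pixel from its assigned cluster center;
    this would enter the reconstruction objective alongside the data term."""
    return float(np.sum((image - centers[labels]) ** 2))

print(clustering_penalty(prior_image))
```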

  11. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankow, Chris; Sampson, Laura; Perri, Leah

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters, namely the primary component's spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  12. Probing finite coarse-grained virtual Feynman histories with sequential weak values

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko; Cohen, Eliahu

    2018-05-01

    Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
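
    The unit-sum property quoted above is easy to verify numerically: because the projectors at each time sum to the identity, the two-time sequential weak values W_jk = <phi|P_j U P_k|psi> / <phi|U|psi> sum to one over a complete set of histories. The states and intermediate unitary below are arbitrary choices for illustration.

```python
# Numerical check: sequential weak values over complete orthogonal projector
# sets sum to one.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 3
psi = rng.normal(size=d) + 1j * rng.normal(size=d); psi /= np.linalg.norm(psi)
phi = rng.normal(size=d) + 1j * rng.normal(size=d); phi /= np.linalg.norm(phi)
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2.0                  # Hermitian generator
U = expm(-1j * H)                           # evolution between the two times

total = 0.0 + 0.0j
for j in range(d):                          # projector P_j = |j><j| at time 2
    for k in range(d):                      # projector P_k = |k><k| at time 1
        Pj = np.zeros((d, d)); Pj[j, j] = 1.0
        Pk = np.zeros((d, d)); Pk[k, k] = 1.0
        total += (phi.conj() @ Pj @ U @ Pk @ psi) / (phi.conj() @ U @ psi)
print(total)                                # ~ (1+0j)
```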

  13. Efficiency of weak brain connections support general cognitive functioning.

    PubMed

    Santarnecchi, Emiliano; Galli, Giulia; Polizzotto, Nicola Riccardo; Rossi, Alessandro; Rossi, Simone

    2014-09-01

    Brain network topology provides valuable information on healthy and pathological brain functioning. Novel approaches for brain network analysis have shown an association between topological properties and cognitive functioning. Under the assumption that "stronger is better", the exploration of brain properties has generally focused on the connectivity patterns of the most strongly correlated regions, whereas the role of weaker brain connections has remained obscure for years. Here, we assessed whether the different strength of connections between brain regions may explain individual differences in intelligence. We analyzed functional connectivity at rest in ninety-eight healthy individuals of different ages, and correlated several connectivity measures with full-scale, verbal, and performance Intelligence Quotients (IQs). Our results showed that the variance in IQ levels was mostly explained by the distributed communication efficiency of brain networks built using moderately weak, long-distance connections, with only a smaller contribution of stronger connections. The variability in individual IQs was associated with the global efficiency of a pool of regions in the prefrontal lobes, hippocampus, temporal pole, and postcentral gyrus. These findings challenge the traditional view of a prominent role of strong functional brain connections in brain topology, and highlight the importance of both strong and weak connections in determining the functional architecture responsible for human intelligence variability. Copyright © 2014 Wiley Periodicals, Inc.
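
    The key measure here, global efficiency, can be compared across edge-strength bands with standard tools. The sketch below uses a synthetic correlation matrix and arbitrary percentile bands (the study's preprocessing and thresholds are not reproduced) to contrast networks built from the strongest edges with networks built from moderately weak ones.

```python
# Global efficiency of graphs built from different correlation-strength bands.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
x = rng.normal(size=(40, 200))              # 40 synthetic regional time series
corr = np.abs(np.corrcoef(x))
np.fill_diagonal(corr, 0.0)

def efficiency(mask):
    return nx.global_efficiency(nx.from_numpy_array(mask.astype(int)))

vals = corr[np.triu_indices_from(corr, k=1)]
q50, q70, q90 = np.quantile(vals, [0.5, 0.7, 0.9])
print("strong edges: ", efficiency(corr >= q90))
print("weaker edges: ", efficiency((corr >= q50) & (corr < q70)))
```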

  14. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin Ezra

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data we study in this research paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or a small set of users on the network. Additionally, we study indicators related to the speed of movement of a user based on the physical locations of their current and previous logins. This data can be ascertained from the IP addresses of the users and is likely very similar to the fraud detection schemes regularly utilized by credit card networks to detect anomalous activity. In future work we would look to find a way to combine these indicators for use as an internal fraud detection system.
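
    The HHI itself is one line of arithmetic: the sum of squared shares. Applied, for example, to a user's monthly session counts per login location (hypothetical data below), values near 1 indicate activity concentrated in one place and values near 1/n indicate evenly spread activity, so a sudden month-over-month shift can serve as the kind of lagging anomaly indicator described above.

```python
# Herfindahl-Hirschman Index: sum of squared shares.
def hhi(counts):
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

print(hhi([50, 3, 2]))    # concentrated activity: ~0.83 (near 1)
print(hhi([20, 18, 17]))  # dispersed activity:    ~0.33 (near 1/3)
```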

  15. Early warnings, weak signals and learning from healthcare disasters.

    PubMed

    Macrae, Carl

    2014-06-01

    In the wake of healthcare disasters, such as the appalling failures of care uncovered in Mid Staffordshire, inquiries and investigations often point to a litany of early warnings and weak signals that were missed, misunderstood or discounted by the professionals and organisations charged with monitoring the safety and quality of care. Some of the most urgent challenges facing those responsible for improving and regulating patient safety are therefore how to identify, interpret, integrate and act on the early warnings and weak signals of emerging risks, before those risks contribute to a disastrous failure of care. These challenges are fundamentally organisational and cultural: they relate to what information is routinely noticed, communicated and attended to within and between healthcare organisations, and, most critically, what is assumed and ignored. Analysing these organisational and cultural challenges suggests three practical ways that healthcare organisations and their regulators can improve safety and address emerging risks. First, engage in practices that actively produce and amplify fleeting signs of ignorance. Second, work to continually define and update a set of specific fears of failure. And third, routinely uncover and publicly circulate knowledge on the sources of systemic risks to patient safety and the improvements required to address them. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.

    PubMed

    Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D

    2017-10-01

    This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.

  17. Analysis of the Bayesian Cramér-Rao lower bound in astrometry. Studying the impact of prior information in the location of an object

    NASA Astrophysics Data System (ADS)

    Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos

    2016-10-01

    Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting, where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound on the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
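
    For reference, one common scalar form of the Bayesian CR bound (the Van Trees inequality) is sketched below, in notation chosen here for illustration: x is the source position, pi(x) its prior density, and I(x) the Fisher information of the observations. The prior contributes its own information term, which is the source of the gain quantified above.

```latex
% Van Trees (Bayesian Cramer-Rao) inequality, scalar form:
\mathbb{E}\!\left[(\hat{x}-x)^{2}\right] \;\ge\;
\frac{1}{\,\mathbb{E}_{\pi}\!\left[I(x)\right] + \mathcal{I}_{\pi}\,},
\qquad
\mathcal{I}_{\pi} \;=\; \mathbb{E}_{\pi}\!\left[\left(\frac{d}{dx}\,\ln \pi(x)\right)^{\!2}\right].
```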

  18. Looking for heavier weak bosons with DUMAND

    NASA Technical Reports Server (NTRS)

    Brown, R. W.; Stecker, F. W.

    1980-01-01

    One or more heavier weak bosons may coexist with the standard weak boson. A broad program may be laid out to search for the heavier W's via the change in the total cross section due to the additional propagator, together with a concomitant search, and a subsequent search for significant antimatter in the universe involving the same annihilation but independent of possible neutrino oscillations. The program is likely to require detectors sensitive to higher energies, such as acoustic detectors.

  19. Internet Usage by Parents Prior to Seeking Care at a Pediatric Emergency Department: Observational Study

    PubMed Central

    2017-01-01

    Background Little is known about how parents utilize medical information on the Internet prior to an emergency department (ED) visit. Objective The objective of the study was to determine the proportion of parents who accessed the Internet for medical information related to their child’s illness in the 24 hours prior to an ED visit (IPED), to identify the websites used, and to understand how the content contributed to the decision to visit the ED. Methods A 40-question interview was conducted with parents presenting to an ED within a freestanding children’s hospital. If parents reported IPED, the number and names of websites were documented. Parents indicated the helpfulness of Web-based content using a 100-mm visual analog scale and the degree to which it contributed to the decision to visit the ED using 5-point Likert-type responses. Results About 11.8% (31/262) reported IPED (95% CI 7.3-5.3). Parents who reported IPED were more likely to have at least some college education (P=.04), higher annual household income (P=.001), and older children (P=.04) than those who did not report IPED. About 35% (11/31) could not name any of the websites used. The mean level of helpfulness of Web-based content was 62 mm (SD=25 mm). After Internet use, some parents (29%, 9/31) were more certain they needed to visit the ED, whereas 19% (6/31) were less certain. A majority (87%, 195/224) of parents who used the Internet stated that they would be somewhat likely or very likely to visit a website recommended by a physician. Conclusions Nearly one in eight parents presenting to an urban pediatric ED reported using the Internet in the 24 hours prior to the ED visit. Among the privately insured, at least one in five parents reported using the Internet prior to visiting the ED. Web-based medical information often influences decision making regarding ED utilization. Pediatric providers should provide parents with recommendations for high-quality sources of health information available

  20. When Relationships Depicted Diagrammatically Conflict with Prior Knowledge: An Investigation of Students' Interpretations of Evolutionary Trees

    ERIC Educational Resources Information Center

    Novick, Laura R.; Catley, Kefyn M.

    2014-01-01

    Science is an important domain for investigating students' responses to information that contradicts their prior knowledge. In previous studies of this topic, this information was communicated verbally. The present research used diagrams, specifically trees (cladograms) depicting evolutionary relationships among taxa. Effects of college…