ERIC Educational Resources Information Center
McCabe, Declan J.; Knight, Evelyn J.
2016-01-01
Since being introduced by Connor and Simberloff in response to Diamond's assembly rules, null model analysis has been a controversial tool in community ecology. Despite being commonly used in the primary literature, null model analysis has not featured prominently in general textbooks. Complexity of approaches along with difficulty in interpreting…
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs partial null lens (PNL for short) and reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameter of each optical element is optimized and this makes system modeling more precise. Systematic error of non-null aspheric testing system is analyzed and can be categorized into two types, the error due to surface parameters of PNL in the system modeling and the rest from non-null interferometer by the approach of error storage subtraction. Experimental results show that, after systematic error is removed from testing result of non-null aspheric testing system, the aspheric surface is precisely reconstructed by ROR technique and the consideration of systematic error greatly increase the test accuracy of non-null aspheric testing system.
Bayesian inference for psychology, part IV: parameter estimation and Bayes factors.
Rouder, Jeffrey N; Haaf, Julia M; Vandekerckhove, Joachim
2018-02-01
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
Interpreting null results from measurements with uncertain correlations: an info-gap approach.
Ben-Haim, Yakov
2011-01-01
Null events—not detecting a pernicious agent—are the basis for declaring the agent is absent. Repeated nulls strengthen confidence in the declaration. However, correlations between observations are difficult to assess in many situations and introduce uncertainty in interpreting repeated nulls. We quantify uncertain correlations using an info-gap model, which is an unbounded family of nested sets of possible probabilities. An info-gap model is nonprobabilistic and entails no assumption about a worst case. We then evaluate the robustness, to uncertain correlations, of estimates of the probability of a null event. This is then the basis for evaluating a nonprobabilistic robustness-based confidence interval for the probability of a null. © 2010 Society for Risk Analysis.
Significance Testing in Confirmatory Factor Analytic Models.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; Hocevar, Dennis
Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…
Evaluating thermoregulation in reptiles: an appropriate null model.
Christian, Keith A; Tracy, Christopher R; Tracy, C Richard
2006-09-01
Established indexes of thermoregulation in ectotherms compare body temperatures of real animals with a null distribution of operative temperatures from a physical or mathematical model with the same size, shape, and color as the actual animal but without mass. These indexes, however, do not account for thermal inertia or the effects of inertia when animals move through thermally heterogeneous environments. Some recent models have incorporated body mass, to account for thermal inertia and the physiological control of warming and cooling rates seen in most reptiles, and other models have incorporated movement through the environment, but none includes all pertinent variables explaining body temperature. We present a new technique for calculating the distribution of body temperatures available to ectotherms that have thermal inertia, random movements, and different rates of warming and cooling. The approach uses a biophysical model of heat exchange in ectotherms and a model of random interaction with thermal environments over the course of a day to create a null distribution of body temperatures that can be used with conventional thermoregulation indexes. This new technique provides an unbiased method for evaluating thermoregulation in large ectotherms that store heat while moving through complex environments, but it can also generate null models for ectotherms of all sizes.
Observation-Oriented Modeling: Going beyond "Is It All a Matter of Chance"?
ERIC Educational Resources Information Center
Grice, James W.; Yepez, Maria; Wilson, Nicole L.; Shoda, Yuichi
2017-01-01
An alternative to null hypothesis significance testing is presented and discussed. This approach, referred to as observation-oriented modeling, is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. In terms of analysis, this novel approach complements traditional methods…
Manna, Soumen K.; Patterson, Andrew D.; Yang, Qian; Krausz, Kristopher W.; Li, Henghong; Idle, Jeffrey R.; Fornace, Albert J.; Gonzalez, Frank J.
2010-01-01
Alcohol-induced liver disease (ALD) is a leading cause of non-accident-related deaths in the United States. Although liver damage caused by ALD is reversible when discovered at the earlier stages, current risk assessment tools are relatively non-specific. Identification of an early specific signature of ALD would aid in therapeutic intervention and recovery. In this study the metabolic changes associated with alcohol-induced liver disease were examined using alcohol-fed male Ppara-null mouse as a model of ALD. Principal components analysis of the mass spectrometry-based urinary metabolic profile showed that alcohol-treated wild-type and Ppara-null mice could be distinguished from control animals without information on history of alcohol consumption. The urinary excretion of ethyl-sulfate, ethyl-β-D-glucuronide, 4-hydroxyphenylacetic acid, and 4-hydroxyphenylacetic acid sulfate was elevated and that of the 2-hydroxyphenylacetic acid, adipic acid, and pimelic acid was depleted during alcohol treatment in both wild-type and the Ppara-null mice albeit to different extents. However, indole-3-lactic acid was exclusively elevated by alcohol exposure in Ppara-null mice. The elevation of indole-3-lactic acid is mechanistically related to the molecular events associated with development of ALD in alcohol-treated Ppara-null mice. This study demonstrated the ability of metabolomics approach to identify early, noninvasive biomarkers of ALD pathogenesis in Ppara-null mouse model. PMID:20540569
Govindan, R B; Kota, Srinivas; Al-Shargabi, Tareq; Massaro, An N; Chang, Taeun; du Plessis, Adre
2016-09-01
Electroencephalogram (EEG) signals are often contaminated by the electrocardiogram (ECG) interference, which affects quantitative characterization of EEG. We propose null-coherence, a frequency-based approach, to attenuate the ECG interference in EEG using simultaneously recorded ECG as a reference signal. After validating the proposed approach using numerically simulated data, we apply this approach to EEG recorded from six newborns receiving therapeutic hypothermia for neonatal encephalopathy. We compare our approach with an independent component analysis (ICA), a previously proposed approach to attenuate ECG artifacts in the EEG signal. The power spectrum and the cortico-cortical connectivity of the ECG attenuated EEG was compared against the power spectrum and the cortico-cortical connectivity of the raw EEG. The null-coherence approach attenuated the ECG contamination without leaving any residual of the ECG in the EEG. We show that the null-coherence approach performs better than ICA in attenuating the ECG contamination without enhancing cortico-cortical connectivity. Our analysis suggests that using ICA to remove ECG contamination from the EEG suffers from redistribution problems, whereas the null-coherence approach does not. We show that both the null-coherence and ICA approaches attenuate the ECG contamination. However, the EEG obtained after ICA cleaning displayed higher cortico-cortical connectivity compared with that obtained using the null-coherence approach. This suggests that null-coherence is superior to ICA in attenuating the ECG interference in EEG for cortico-cortical connectivity analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
Concerns regarding a call for pluralism of information theory and hypothesis testing
Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.
2007-01-01
1. Stephens et al . (2005) argue for `pluralism? in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. `no difference?) is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al . (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj| data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data |H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models
ERIC Educational Resources Information Center
Christensen, Karl Bang; Kreiner, Svend
2007-01-01
Many statistical tests are designed to test the different assumptions of the Rasch model, but only few are directed at detecting multidimensionality. The Martin-Lof test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper will describe a novel and automated system based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, ragu sauce, revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on these set of images.
Qualification of a Null Lens Using Image-Based Phase Retrieval
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Aronstein, David L.; Hill, Peter C.; Smith, J. Scott; Zielinski, Thomas P.
2012-01-01
In measuring the figure error of an aspheric optic using a null lens, the wavefront contribution from the null lens must be independently and accurately characterized in order to isolate the optical performance of the aspheric optic alone. Various techniques can be used to characterize such a null lens, including interferometry, profilometry and image-based methods. Only image-based methods, such as phase retrieval, can measure the null-lens wavefront in situ - in single-pass, and at the same conjugates and in the same alignment state in which the null lens will ultimately be used - with no additional optical components. Due to the intended purpose of a Dull lens (e.g., to null a large aspheric wavefront with a near-equal-but-opposite spherical wavefront), characterizing a null-lens wavefront presents several challenges to image-based phase retrieval: Large wavefront slopes and high-dynamic-range data decrease the capture range of phase-retrieval algorithms, increase the requirements on the fidelity of the forward model of the optical system, and make it difficult to extract diagnostic information (e.g., the system F/#) from the image data. In this paper, we present a study of these effects on phase-retrieval algorithms in the context of a null lens used in component development for the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Approaches for mitigation are also discussed.
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rational and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Goovaerts, Pierre; Jacquez, Geoffrey M
2004-01-01
Background Complete Spatial Randomness (CSR) is the null hypothesis employed by many statistical tests for spatial pattern, such as local cluster or boundary analysis. CSR is however not a relevant null hypothesis for highly complex and organized systems such as those encountered in the environmental and health sciences in which underlying spatial pattern is present. This paper presents a geostatistical approach to filter the noise caused by spatially varying population size and to generate spatially correlated neutral models that account for regional background obtained by geostatistical smoothing of observed mortality rates. These neutral models were used in conjunction with the local Moran statistics to identify spatial clusters and outliers in the geographical distribution of male and female lung cancer in Nassau, Queens, and Suffolk counties, New York, USA. Results We developed a typology of neutral models that progressively relaxes the assumptions of null hypotheses, allowing for the presence of spatial autocorrelation, non-uniform risk, and incorporation of spatially heterogeneous population sizes. Incorporation of spatial autocorrelation led to fewer significant ZIP codes than found in previous studies, confirming earlier claims that CSR can lead to over-identification of the number of significant spatial clusters or outliers. Accounting for population size through geostatistical filtering increased the size of clusters while removing most of the spatial outliers. Integration of regional background into the neutral models yielded substantially different spatial clusters and outliers, leading to the identification of ZIP codes where SMR values significantly depart from their regional background. Conclusion The approach presented in this paper enables researchers to assess geographic relationships using appropriate null hypotheses that account for the background variation extant in real-world systems. In particular, this new methodology allows one to identify geographic pattern above and beyond background variation. The implementation of this approach in spatial statistical software will facilitate the detection of spatial disparities in mortality rates, establishing the rationale for targeted cancer control interventions, including consideration of health services needs, and resource allocation for screening and diagnostic testing. It will allow researchers to systematically evaluate how sensitive their results are to assumptions implicit under alternative null hypotheses. PMID:15272930
A Quantitative Approach to Scar Analysis
Khorasani, Hooman; Zheng, Zhong; Nguyen, Calvin; Zara, Janette; Zhang, Xinli; Wang, Joyce; Ting, Kang; Soo, Chia
2011-01-01
Analysis of collagen architecture is essential to wound healing research. However, to date no consistent methodologies exist for quantitatively assessing dermal collagen architecture in scars. In this study, we developed a standardized approach for quantitative analysis of scar collagen morphology by confocal microscopy using fractal dimension and lacunarity analysis. Full-thickness wounds were created on adult mice, closed by primary intention, and harvested at 14 days after wounding for morphometrics and standard Fourier transform-based scar analysis as well as fractal dimension and lacunarity analysis. In addition, transmission electron microscopy was used to evaluate collagen ultrastructure. We demonstrated that fractal dimension and lacunarity analysis were superior to Fourier transform analysis in discriminating scar versus unwounded tissue in a wild-type mouse model. To fully test the robustness of this scar analysis approach, a fibromodulin-null mouse model that heals with increased scar was also used. Fractal dimension and lacunarity analysis effectively discriminated unwounded fibromodulin-null versus wild-type skin as well as healing fibromodulin-null versus wild-type wounds, whereas Fourier transform analysis failed to do so. Furthermore, fractal dimension and lacunarity data also correlated well with transmission electron microscopy collagen ultrastructure analysis, adding to their validity. These results demonstrate that fractal dimension and lacunarity are more sensitive than Fourier transform analysis for quantification of scar morphology. PMID:21281794
A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.
Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P
2010-11-01
The goal of the present work was assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of the shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted in the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered as a spherical joint. The muscle wrapping was considered around the humeral head assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was solved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress and a null-space optimization to restrict the muscle force to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has a great potential for more complex musculoskeletal modeling of the shoulder joint. In particular it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
A Null Space Control of Two Wheels Driven Mobile Manipulator Using Passivity Theory
NASA Astrophysics Data System (ADS)
Shibata, Tsuyoshi; Murakami, Toshiyuki
This paper describes a control strategy of null space motion of a two wheels driven mobile manipulator. Recently, robot is utilized in various industrial fields and it is preferable for the robot manipulator to have multiple degrees of freedom motion. Several studies of kinematics for null space motion have been proposed. However stability analysis of null space motion is not enough. Furthermore, these approaches apply to stable systems, but they do not apply unstable systems. Then, in this research, base of manipulator equips with two wheels driven mobile robot. This robot is called two wheels driven mobile manipulator, which becomes unstable system. In the proposed approach, a control design of null space uses passivity based stabilizing. A proposed controller is decided so that closed-loop system of robot dynamics satisfies passivity. This is passivity based control. Then, control strategy is that stabilizing of the robot system applies to work space observer based approach and null space control while keeping end-effector position. The validity of the proposed approach is verified by simulations and experiments of two wheels driven mobile manipulator.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
An improved null model for assessing the net effects of multiple stressors on communities.
Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D
2018-01-01
Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
Functional Linear Model with Zero-value Coefficient Function at Sub-regions.
Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin
2013-01-01
We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inferences for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, it is needed to use a large number of knots in order to precisely identify the zero-coefficient region; however, the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
The Importance of Proving the Null
Gallistel, C. R.
2010-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
Testing Small Variance Priors Using Prior-Posterior Predictive p Values.
Hoijtink, Herbert; van de Schoot, Rens
2017-04-03
Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, both are not suited for the evaluation of models based on small variance priors. In this article, a well behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Canal, G. P.; Ferraro, N. M.; Evans, T. E.; ...
2017-04-20
Here in this work, single- and two-fluid resistive magnetohydrodynamic calculations of the plasma response to n = 3 magnetic perturbations in single-null (SN) and snowflake (SF) divertor configurations are compared with those based on the vacuum approach. The calculations are performed using the code M3D-C 1 and are based on simulated NSTX-U plasmas. Significantly different plasma responses were found from these calculations, with the difference between the single- and two-fluid plasma responses being caused mainly by the different screening mechanism intrinsic to each of these models. Although different plasma responses were obtained from these different plasma models, no significant differencemore » between the SN and SF plasma responses were found. However, due to their different equilibrium properties, magnetic perturbations cause the SF configuration to develop additional and longer magnetic lobes in the null-point region than the SN, regardless of the plasma model used. The intersection of these longer and additional lobes with the divertor plates are expected to cause more striations in the particle and heat flux target profiles. In addition, the results indicate that the size of the magnetic lobes, in both single-null and snowflake configurations, are more sensitive to resonant magnetic perturbations than to non-resonant magnetic perturbations.« less
Kumar, Ramiya; Mota, Linda C.; Litoff, Elizabeth J.; Rooney, John P.; Boswell, W. Tyler; Courter, Elliott; Henderson, Charles M.; Hernandez, Juan P.; Corton, J. Christopher; Moore, David D.
2017-01-01
Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, metabolism, distribution, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we investigated changes in transcript levels, protein expression, and steroid hydroxylation of several xenobiotic detoxifying CYPs in constitutive androstane receptor (CAR)-null and two CYP-null mouse models that have subfamily members regulated by CAR; the Cyp3a-null and a newly described Cyp2b9/10/13-null mouse model. Compensatory changes in CYP expression that occur in these models may also occur in polymorphic humans, or may complicate interpretation of ADME studies performed using these models. The loss of CAR causes significant changes in several CYPs probably due to loss of CAR-mediated constitutive regulation of these CYPs. Expression and activity changes include significant repression of Cyp2a and Cyp2b members with corresponding drops in 6α- and 16β-testosterone hydroxylase activity. Further, the ratio of 6α-/15α-hydroxylase activity, a biomarker of sexual dimorphism in the liver, indicates masculinization of female CAR-null mice, suggesting a role for CAR in the regulation of sexually dimorphic liver CYP profiles. The loss of Cyp3a causes fewer changes than CAR. Nevertheless, there are compensatory changes including gender-specific increases in Cyp2a and Cyp2b. Cyp2a and Cyp2b were down-regulated in CAR-null mice, suggesting activation of CAR and potentially PXR following loss of the Cyp3a members. However, the loss of Cyp2b causes few changes in hepatic CYP transcript levels and almost no significant compensatory changes in protein expression or activity with the possible exception of 6α-hydroxylase activity. This lack of a compensatory response in the Cyp2b9/10/13-null mice is probably due to low CYP2B hepatic expression, especially in male mice. Overall, compensatory and regulatory CYP changes followed the order CAR-null > Cyp3a-null > Cyp2b-null mice. PMID:28350814
Kumar, Ramiya; Mota, Linda C; Litoff, Elizabeth J; Rooney, John P; Boswell, W Tyler; Courter, Elliott; Henderson, Charles M; Hernandez, Juan P; Corton, J Christopher; Moore, David D; Baldwin, William S
2017-01-01
Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, metabolism, distribution, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we investigated changes in transcript levels, protein expression, and steroid hydroxylation of several xenobiotic detoxifying CYPs in constitutive androstane receptor (CAR)-null and two CYP-null mouse models that have subfamily members regulated by CAR; the Cyp3a-null and a newly described Cyp2b9/10/13-null mouse model. Compensatory changes in CYP expression that occur in these models may also occur in polymorphic humans, or may complicate interpretation of ADME studies performed using these models. The loss of CAR causes significant changes in several CYPs probably due to loss of CAR-mediated constitutive regulation of these CYPs. Expression and activity changes include significant repression of Cyp2a and Cyp2b members with corresponding drops in 6α- and 16β-testosterone hydroxylase activity. Further, the ratio of 6α-/15α-hydroxylase activity, a biomarker of sexual dimorphism in the liver, indicates masculinization of female CAR-null mice, suggesting a role for CAR in the regulation of sexually dimorphic liver CYP profiles. The loss of Cyp3a causes fewer changes than CAR. Nevertheless, there are compensatory changes including gender-specific increases in Cyp2a and Cyp2b. Cyp2a and Cyp2b were down-regulated in CAR-null mice, suggesting activation of CAR and potentially PXR following loss of the Cyp3a members. However, the loss of Cyp2b causes few changes in hepatic CYP transcript levels and almost no significant compensatory changes in protein expression or activity with the possible exception of 6α-hydroxylase activity. This lack of a compensatory response in the Cyp2b9/10/13-null mice is probably due to low CYP2B hepatic expression, especially in male mice. Overall, compensatory and regulatory CYP changes followed the order CAR-null > Cyp3a-null > Cyp2b-null mice.
Approaches for Achieving Broadband Achromatic Phase Shifts for Visible Nulling Coronagraphy
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Lyon, Richard G.
2012-01-01
Visible nulling coronagraphy is one of the few approaches to the direct detection and characterization of Jovian and Terrestrial exoplanets that works with segmented aperture telescopes. Jovian and Terrestrial planets require at least 10(exp -9) and 10(exp -10) image plane contrasts, respectively, within the spectral bandpass and thus require a nearly achromatic pi-phase difference between the arms of the interferometer. An achromatic pi-phase shift can be achieved by several techniques, including sequential angled thick glass plates of varying dispersive materials, distributed thin-film multilayer coatings, and techniques that leverage the polarization-dependent phase shift of total-internal reflections. Herein we describe two such techniques: sequential thick glass plates and Fresnel rhomb prisms. A viable technique must achieve the achromatic phase shift while simultaneously minimizing the intensity difference, chromatic beam spread and polarization variation between each arm. In this paper we describe the above techniques and report on efforts to design, model, fabricate, align the trades associated with each technique that will lead to an implementations of the most promising one in Goddard's Visible Nulling Coronagraph (VNC).
The importance of proving the null.
Gallistel, C R
2009-04-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? (c) 2009 APA, all rights reserved
D=10 Chiral Tensionless Super p-BRANES
NASA Astrophysics Data System (ADS)
Bozhilov, P.
We consider a model for tensionless (null) super-p-branes with N chiral supersymmetries in ten-dimensional flat space-time. After establishing the symmetries of the action, we give the general solution of the classical equations of motion in a particular gauge. In the case of a null superstring (p=1) we find the general solution in an arbitrary gauge. Then, using a harmonic superspace approach, the initial algebra of first- and second-class constraints is converted into an algebra of Lorentz-covariant, BFV-irreducible, first-class constraints only. The corresponding BRST charge is as for a first rank dynamical system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gascoyne, Andrew, E-mail: a.d.gascoyne@sheffield.ac.uk
2015-03-15
Using a full orbit test particle approach, we analyse the motion of a single proton in the vicinity of magnetic null point configurations which are solutions to the kinematic, steady state, resistive magnetohydrodynamics equations. We consider two magnetic configurations, namely, the sheared and torsional spine reconnection regimes [E. R. Priest and D. I. Pontin, Phys. Plasmas 16, 122101 (2009); P. Wyper and R. Jain, Phys. Plasmas 17, 092902 (2010)]; each produce an associated electric field and thus the possibility of accelerating charged particles to high energy levels, i.e., > MeV, as observed in solar flares [R. P. Lin, Space Sci. Rev. 124,more » 233 (2006)]. The particle's energy gain is strongly dependent on the location of injection and is characterised by the angle of approach β, with optimum angle of approach β{sub opt} as the value of β which produces the maximum energy gain. We examine the topological features of each regime and analyse the effect on the energy gain of the proton. We also calculate the complete Lyapunov spectrum for the considered dynamical systems in order to correctly quantify the chaotic nature of the particle orbits. We find that the sheared model is a good candidate for the acceleration of particles, and for increased shear, we expect a larger population to be accelerated to higher energy levels. In the strong electric field regime (E{sub 0}=1500 V/m), the torsional model produces chaotic particle orbits quantified by the calculation of multiple positive Lyapunov exponents in the spectrum, whereas the sheared model produces chaotic orbits only in the neighbourhood of the null point.« less
High Contrast Vacuum Nuller Testbed (VNT) Contrast, Performance and Null Control
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.
2012-01-01
Herein we report on our contrast assessment and the development, sensing and control of the Vacuum Nuller Testbed to realize a Visible Nulling Coronagraphy (VNC) for exoplanet detection and characterization. Tbe VNC is one of the few approaches that works with filled, segmented and sparse or diluted-aperture telescope systems. It thus spans a range of potential future NASA telescopes and could be flown as a separate instrument on such a future mission. NASA/Goddard Space Flight Center has an established effort to develop VNC technologies, and an incremental sequence of testbeds to advance this approach and its critical technologies. We discuss the development of the vacuum Visible Nulling Coronagraph testbed (VNT). The VNT is an ultra-stable vibration isolated testbed that operates under closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible-light nulling milestones with sequentially higher contrasts of 10(exp 8), 10(exp 9) and ideally 10(exp 10) at an inner working angle of 2*lambda/D. The VNT is based on a modified Mach-Zehnder nulling interferometer, with a "W" configuration to accommodate a hex-packed MEMS based deformable mirror, a coherent fiber bundle and achromatic phase shifters. We discuss the laboratory results, optical configuration, critical technologies and the null sensing and control approach.
Visible Nulling Coronagraphy Testbed Development for Exoplanet Detection
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Woodruff, Robert A.; Vasudevan, Gopal; Thompson, Patrick; Chen, Andrew; Petrone, Peter; Booth, Andrew; Madison, Timothy; Bolcar, Matthew;
2010-01-01
Three of the recently completed NASA Astrophysics Strategic Mission Concept (ASMC) studies addressed the feasibility of using a Visible Nulling Coronagraph (VNC) as the prime instrument for exoplanet science. The VNC approach is one of the few approaches that works with filled, segmented and sparse or diluted aperture telescope systems and thus spans the space of potential ASMC exoplanet missions. NASA/Goddard Space Flight Center (GSFC) has a well-established effort to develop VNC technologies and has developed an incremental sequence of VNC testbeds to advance the this approach and the technologies associated with it. Herein we report on the continued development of the vacuum Visible Nulling Coronagraph testbed (VNT). The VNT is an ultra-stable vibration isolated testbed that operates under high bandwidth closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible light nulling milestones of sequentially higher contrasts of 10(exp 8) , 10(exp 9) and 10(exp 10) at an inner working angle of 2*lambda/D and ultimately culminate in spectrally broadband (>20%) high contrast imaging. Each of the milestones, one per year, is traceable to one or more of the ASMC studies. The VNT uses a modified Mach-Zehnder nulling interferometer, modified with a modified "W" configuration to accommodate a hex-packed MEMS based deformable mirror, a coherent fiber bundle and achromatic phase shifters. Discussed will be the optical configuration laboratory results, critical technologies and the null sensing and control approach.
The data-driven null models for information dissemination tree in social networks
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Wang, Zhenyu
2017-10-01
For the purpose of detecting relatedness and co-occurrence between users, as well as the distribution features of nodes in spreading path of a social network, this paper explores topological characteristics of information dissemination trees (IDT) that can be employed indirectly to probe the information dissemination laws within social networks. Hence, three different null models of IDT are presented in this article, including the statistical-constrained 0-order IDT null model, the random-rewire-broken-edge 0-order IDT null model and the random-rewire-broken-edge 2-order IDT null model. These null models firstly generate the corresponding randomized copy of an actual IDT; then the extended significance profile, which is developed by adding the cascade ratio of information dissemination path, is exploited not only to evaluate degree correlation of two nodes associated with an edge, but also to assess the cascade ratio of different length of information dissemination paths. The experimental correspondences of the empirical analysis for several SinaWeibo IDTs and Twitter IDTs indicate that the IDT null models presented in this paper perform well in terms of degree correlation of nodes and dissemination path cascade ratio, which can be better to reveal the features of information dissemination and to fit the situation of real social networks.
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
On the Penrose inequality along null hypersurfaces
NASA Astrophysics Data System (ADS)
Mars, Marc; Soria, Alberto
2016-06-01
The null Penrose inequality, i.e. the Penrose inequality in terms of the Bondi energy, is studied by introducing a functional on surfaces and studying its properties along a null hypersurface Ω extending to past null infinity. We prove a general Penrose-type inequality which involves the limit at infinity of the Hawking energy along a specific class of geodesic foliations called Geodesic Asymptotically Bondi (GAB), which are shown to always exist. Whenever this foliation approaches large spheres, this inequality becomes the null Penrose inequality and we recover the results of Ludvigsen-Vickers (1983 J. Phys. A: Math. Gen. 16 3349-53) and Bergqvist (1997 Class. Quantum Grav. 14 2577-83). By exploiting further properties of the functional along general geodesic foliations, we introduce an approach to the null Penrose inequality called the Renormalized Area Method and find a set of two conditions which imply the validity of the null Penrose inequality. One of the conditions involves a limit at infinity and the other a restriction on the spacetime curvature along the flow. We investigate their range of applicability in two particular but interesting cases, namely the shear-free and vacuum case, where the null Penrose inequality is known to hold from the results by Sauter (2008 PhD Thesis Zürich ETH), and the case of null shells propagating in the Minkowski spacetime. Finally, a general inequality bounding the area of the quasi-local black hole in terms of an asymptotic quantity intrinsic of Ω is derived.
Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, metabolism, distribution, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we ...
Prum, Richard O
2010-11-01
The Fisher-inspired, arbitrary intersexual selection models of Lande (1981) and Kirkpatrick (1982), including both stable and unstable equilibrium conditions, provide the appropriate null model for the evolution of traits and preferences by intersexual selection. Like the Hardy–Weinberg equilibrium, the Lande–Kirkpatrick (LK) mechanism arises as an intrinsic consequence of genetic variation in trait and preference in the absence of other evolutionary forces. The LK mechanism is equivalent to other intersexual selection mechanisms in the absence of additional selection on preference and with additional trait-viability and preference-viability correlations equal to zero. The LK null model predicts the evolution of arbitrary display traits that are neither honest nor dishonest, indicate nothing other than mating availability, and lack any meaning or design other than their potential to correspond to mating preferences. The current standard for demonstrating an arbitrary trait is impossible to meet because it requires proof of the null hypothesis. The LK null model makes distinct predictions about the evolvability of traits and preferences. Examples of recent intersexual selection research document the confirmationist pitfalls of lacking a null model. Incorporation of the LK null into intersexual selection will contribute to serious examination of the extent to which natural selection on preferences shapes signals.
Vacuum Nuller Testbed (VNT) Performance, Characterization and Null Control: Progress Report
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.; Noecker, M. Charley; Kendrick, Stephen; Helmbrecht, Michael
2011-01-01
Herein we report on the development. sensing and control and our first results with the Vacuum Nuller Testbed to realize a Visible Nulling Coronagraph (VNC) for exoplanet coronagraphy. The VNC is one of the few approaches that works with filled. segmented and sparse or diluted-aperture telescope systems. It thus spans a range of potential future NASA telescopes and could be Hown as a separate instrument on such a future mission. NASA/Goddard Space Flight Center (GSFC) has a well-established effort to develop VNC technologies. and has developed an incremental sequence of VNC testbeds to advance this approach and the enabling technologies associated with it. We discuss the continued development of the vacuum Visible Nulling Coronagraph testbed (VNT). Tbe VNT is an ultra-stable vibration isolated testbed that operates under closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible-light nulling milestones with sequentially higher contrasts of 10(sup 8), 10(sup 9) and ideally 10(sup 10) at an inner working angle of 2*lambda/D. The VNT is based on a modified Mach-Zehnder nulling interferometer, with a "W" configuration to accommodate a hex-packed MEMS based deformable mirror, a coherent fiber bundle and achromatic phase shifters. We discuss the initial laboratory results, the optical configuration, critical technologies and the null sensing and control approach.
Examining speed versus selection in connectivity models using elk migration as an example
Brennan, Angela; Hanks, Ephraim M.; Merkle, Jerod A.; Cole, Eric K.; Dewey, Sarah R.; Courtemanch, Alyson B.; Cross, Paul C.
2018-01-01
ContextLandscape resistance is vital to connectivity modeling and frequently derived from resource selection functions (RSFs). RSFs estimate relative probability of use and tend to focus on understanding habitat preferences during slow, routine animal movements (e.g., foraging). Dispersal and migration, however, can produce rarer, faster movements, in which case models of movement speed rather than resource selection may be more realistic for identifying habitats that facilitate connectivity.ObjectiveTo compare two connectivity modeling approaches applied to resistance estimated from models of movement rate and resource selection.MethodsUsing movement data from migrating elk, we evaluated continuous time Markov chain (CTMC) and movement-based RSF models (i.e., step selection functions [SSFs]). We applied circuit theory and shortest random path (SRP) algorithms to CTMC, SSF and null (i.e., flat) resistance surfaces to predict corridors between elk seasonal ranges. We evaluated prediction accuracy by comparing model predictions to empirical elk movements.ResultsAll connectivity models predicted elk movements well, but models applied to CTMC resistance were more accurate than models applied to SSF and null resistance. Circuit theory models were more accurate on average than SRP models.ConclusionsCTMC can be more realistic than SSFs for estimating resistance for fast movements, though SSFs may demonstrate some predictive ability when animals also move slowly through corridors (e.g., stopover use during migration). High null model accuracy suggests seasonal range data may also be critical for predicting direct migration routes. For animals that migrate or disperse across large landscapes, we recommend incorporating CTMC into the connectivity modeling toolkit.
Králová-Hromadová, Ivica; Minárik, Gabriel; Bazsalovicsová, Eva; Mikulíček, Peter; Oravcová, Alexandra; Pálková, Lenka; Hanzelová, Vladimíra
2015-02-01
Caryophyllaeus laticeps (Pallas 1781) (Cestoda: Caryophyllidea) is a monozoic tapeworm of cyprinid fishes with a distribution area that includes Europe, most of the Palaearctic Asia and northern Africa. Broad geographic distribution, wide range of definitive fish hosts and recently revealed high morphological plasticity of the parasite, which is not in an agreement with molecular findings, make this species to be an interesting model for population biology studies. Microsatellites (short tandem repeat (STR) markers), as predominant markers for population genetics, were designed for C. laticeps using a next-generation sequencing (NGS) approach. Out of 165 marker candidates, 61 yielded PCR products of the expected size and in 25 of the candidates a declared repetitive motif was confirmed by Sanger sequencing. After the fragment analysis, six loci were proved to be polymorphic and tested for heterozygosity, Hardy-Weinberg equilibrium and the presence of null alleles on 59 individuals coming from three geographically widely separated populations (Slovakia, Russia and UK). The number of alleles in particular loci and populations ranged from two to five. Significant deficit of heterozygotes and the presence of null alleles were found in one locus in all three populations. Other loci showed deviations from Hardy-Weinberg equilibrium and the presence of null alleles only in some populations. In spite of relatively low polymorphism and the potential presence of null alleles, newly developed microsatellites may be applied as suitable markers in population genetic studies of C. laticeps.
Three Strategies for the Critical Use of Statistical Methods in Psychological Research
ERIC Educational Resources Information Center
Campitelli, Guillermo; Macbeth, Guillermo; Ospina, Raydonal; Marmolejo-Ramos, Fernando
2017-01-01
We present three strategies to replace the null hypothesis statistical significance testing approach in psychological research: (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The three strategies…
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximal likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that model threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
Examining speed versus selection in connectivity models using elk migration as an example
Brennan, Angela; Hanks, EM; Merkle, JA; Cole, EK; Dewey, SR; Courtemanch, AB; Cross, Paul C.
2018-01-01
Context: Landscape resistance is vital to connectivity modeling and frequently derived from resource selection functions (RSFs). RSFs estimate relative probability of use and tend to focus on understanding habitat preferences during slow, routine animal movements (e.g., foraging). Dispersal and migration, however, can produce rarer, faster movements, in which case models of movement speed rather than resource selection may be more realistic for identifying habitats that facilitate connectivity. Objective: To compare two connectivity modeling approaches applied to resistance estimated from models of movement rate and resource selection. Methods: Using movement data from migrating elk, we evaluated continuous time Markov chain (CTMC) and movement-based RSF models (i.e., step selection functions [SSFs]). We applied circuit theory and shortest random path (SRP) algorithms to CTMC, SSF and null (i.e., flat) resistance surfaces to predict corridors between elk seasonal ranges. We evaluated prediction accuracy by comparing model predictions to empirical elk movements. Results: All models predicted elk movements well, but models applied to CTMC resistance were more accurate than models applied to SSF and null resistance. Circuit theory models were more accurate on average than SRP algorithms. Conclusions: CTMC can be more realistic than SSFs for estimating resistance for fast movements, though SSFs may demonstrate some predictive ability when animals also move slowly through corridors (e.g., stopover use during migration). High null model accuracy suggests seasonal range data may also be critical for predicting direct migration routes. For animals that migrate or disperse across large landscapes, we recommend incorporating CTMC into the connectivity modeling toolkit.
High Contrast Vacuum Nuller Testbed (VNT) Contrast, Performance and Null Control
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.
2012-01-01
Herein we report on our Visible Nulling Coronagraph high-contrast result of 109 contrast averaged over a focal planeregion extending from 14 D with the Vacuum Nuller Testbed (VNT) in a vibration isolated vacuum chamber. TheVNC is a hybrid interferometriccoronagraphic approach for exoplanet science. It operates with high Lyot stopefficiency for filled, segmented and sparse or diluted-aperture telescopes, thereby spanning the range of potential futureNASA flight telescopes. NASAGoddard Space Flight Center (GSFC) has a well-established effort to develop the VNCand its technologies, and has developed an incremental sequence of VNC testbeds to advance this approach and itsenabling technologies. These testbeds have enabled advancement of high-contrast, visible light, nulling interferometry tounprecedented levels. The VNC is based on a modified Mach-Zehnder nulling interferometer, with a W configurationto accommodate a hex-packed MEMS based deformable mirror, a coherent fiber bundle and achromatic phase shifters.We give an overview of the VNT and discuss the high-contrast laboratory results, the optical configuration, criticaltechnologies and null sensing and control.
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
A Gaussian Mixture Model for Nulling Pulsars
NASA Astrophysics Data System (ADS)
Kaplan, D. L.; Swiggum, J. K.; Fichtenbauer, T. D. J.; Vallisneri, M.
2018-03-01
The phenomenon of pulsar nulling—where pulsars occasionally turn off for one or more pulses—provides insight into pulsar-emission mechanisms and the processes by which pulsars turn off when they cross the “death line.” However, while ever more pulsars are found that exhibit nulling behavior, the statistical techniques used to measure nulling are biased, with limited utility and precision. In this paper, we introduce an improved algorithm, based on Gaussian mixture models, for measuring pulsar nulling behavior. We demonstrate this algorithm on a number of pulsars observed as part of a larger sample of nulling pulsars, and show that it performs considerably better than existing techniques, yielding better precision and no bias. We further validate our algorithm on simulated data. Our algorithm is widely applicable to a large number of pulsars even if they do not show obvious nulls. Moreover, it can be used to derive nulling probabilities of nulling for individual pulses, which can be used for in-depth studies.
A Minimalist Approach to Null Subjects and Objects in Second Language Acquisition
ERIC Educational Resources Information Center
Park, H.
2004-01-01
Studies of the second language acquisition of pronominal arguments have observed that: (1) L1 speakers of null subject languages of the Spanish type drop more subjects in their second language (L2) English than first language (L1) speakers of null subject languages of the Korean type and (2) speakers of Korean-type languages drop more objects than…
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence, obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in optimization society for solving the BSS problem. As examples, we demonstrated electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Williams, Hefin Wyn; Cross, Dónall Eoin; Crump, Heather Louise; Drost, Cornelis Jan; Thomas, Christopher James
2015-08-28
There is increasing evidence that the geographic distribution of tick species is changing. Whilst correlative Species Distribution Models (SDMs) have been used to predict areas that are potentially suitable for ticks, models have often been assessed without due consideration for spatial patterns in the data that may inflate the influence of predictor variables on species distributions. This study used null models to rigorously evaluate the role of climate and the potential for climate change to affect future climate suitability for eight European tick species, including several important disease vectors. We undertook a comparative assessment of the performance of Maxent and Mahalanobis Distance SDMs based on observed data against those of null models based on null species distributions or null climate data. This enabled the identification of species whose distributions demonstrate a significant association with climate variables. Latest generation (AR5) climate projections were subsequently used to project future climate suitability under four Representative Concentration Pathways (RCPs). Seven out of eight tick species exhibited strong climatic signals within their observed distributions. Future projections intimate varying degrees of northward shift in climate suitability for these tick species, with the greatest shifts forecasted under the most extreme RCPs. Despite the high performance measure obtained for the observed model of Hyalomma lusitanicum, it did not perform significantly better than null models; this may result from the effects of non-climatic factors on its distribution. By comparing observed SDMs with null models, our results allow confidence that we have identified climate signals in tick distributions that are not simply a consequence of spatial patterns in the data. Observed climate-driven SDMs for seven out of eight species performed significantly better than null models, demonstrating the vulnerability of these tick species to the effects of climate change in the future.
Testing for nonlinearity in time series: The method of surrogate data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theiler, J.; Galdrikian, B.; Longtin, A.
1991-01-01
We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of surrogate'' data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against themore » null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.« less
A new modeling and inference approach for the Systolic Blood Pressure Intervention Trial outcomes.
Yang, Song; Ambrosius, Walter T; Fine, Lawrence J; Bress, Adam P; Cushman, William C; Raj, Dominic S; Rehman, Shakaib; Tamariz, Leonardo
2018-06-01
Background/aims In clinical trials with time-to-event outcomes, usually the significance tests and confidence intervals are based on a proportional hazards model. Thus, the temporal pattern of the treatment effect is not directly considered. This could be problematic if the proportional hazards assumption is violated, as such violation could impact both interim and final estimates of the treatment effect. Methods We describe the application of inference procedures developed recently in the literature for time-to-event outcomes when the treatment effect may or may not be time-dependent. The inference procedures are based on a new model which contains the proportional hazards model as a sub-model. The temporal pattern of the treatment effect can then be expressed and displayed. The average hazard ratio is used as the summary measure of the treatment effect. The test of the null hypothesis uses adaptive weights that often lead to improvement in power over the log-rank test. Results Without needing to assume proportional hazards, the new approach yields results consistent with previously published findings in the Systolic Blood Pressure Intervention Trial. It provides a visual display of the time course of the treatment effect. At four of the five scheduled interim looks, the new approach yields smaller p values than the log-rank test. The average hazard ratio and its confidence interval indicates a treatment effect nearly a year earlier than a restricted mean survival time-based approach. Conclusion When the hazards are proportional between the comparison groups, the new methods yield results very close to the traditional approaches. When the proportional hazards assumption is violated, the new methods continue to be applicable and can potentially be more sensitive to departure from the null hypothesis.
Interpreting null findings from trials of alcohol brief interventions.
Heather, Nick
2014-01-01
The effectiveness of alcohol brief intervention (ABI) has been established by a succession of meta-analyses but, because the effects of ABI are small, null findings from randomized controlled trials are often reported and can sometimes lead to skepticism regarding the benefits of ABI in routine practice. This article first explains why null findings are likely to occur under null hypothesis significance testing (NHST) due to the phenomenon known as "the dance of the p-values." A number of misconceptions about null findings are then described, using as an example the way in which the results of the primary care arm of a recent cluster-randomized trial of ABI in England (the SIPS project) have been misunderstood. These misinterpretations include the fallacy of "proving the null hypothesis" that lack of a significant difference between the means of sample groups can be taken as evidence of no difference between their population means, and the possible effects of this and related misunderstandings of the SIPS findings are examined. The mistaken inference that reductions in alcohol consumption seen in control groups from baseline to follow-up are evidence of real effects of control group procedures is then discussed and other possible reasons for such reductions, including regression to the mean, research participation effects, historical trends, and assessment reactivity, are described. From the standpoint of scientific progress, the chief problem about null findings under the conventional NHST approach is that it is not possible to distinguish "evidence of absence" from "absence of evidence." By contrast, under a Bayesian approach, such a distinction is possible and it is explained how this approach could classify ABIs in particular settings or among particular populations as either truly ineffective or as of unknown effectiveness, thus accelerating progress in the field of ABI research.
Spacelike matching to null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zenginoglu, Anil; Tiglio, Manuel
2009-07-15
We present two methods to include the asymptotic domain of a background spacetime in null directions for numerical solutions of evolution equations so that both the radiation extraction problem and the outer boundary problem are solved. The first method is based on the geometric conformal approach, the second is a coordinate based approach. We apply these methods to the case of a massless scalar wave equation on a Kerr spacetime. Our methods are designed to allow existing codes to reach the radiative zone by including future null infinity in the computational domain with relatively minor modifications. We demonstrate the flexibilitymore » of the methods by considering both Boyer-Lindquist and ingoing Kerr coordinates near the black hole. We also confirm numerically predictions concerning tail decay rates for scalar fields at null infinity in Kerr spacetime due to Hod for the first time.« less
A null model for Pearson coexpression networks.
Gobbi, Andrea; Jurman, Giuseppe
2015-01-01
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.
DAMA confronts null searches in the effective theory of dark matter-nucleon interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catena, Riccardo; Ibarra, Alejandro; Wild, Sebastian
2016-05-17
We examine the dark matter interpretation of the modulation signal reported by the DAMA experiment from the perspective of effective field theories displaying Galilean invariance. We consider the most general effective coupling leading to the elastic scattering of a dark matter particle with spin 0 or 1/2 off a nucleon, and we analyze the compatibility of the DAMA signal with the null results from other direct detection experiments, as well as with the non-observation of a high energy neutrino flux in the direction of the Sun from dark matter annihilation. To this end, we develop a novel semi-analytical approach formore » comparing experimental results in the high-dimensional parameter space of the non-relativistic effective theory. Assuming the standard halo model, we find a strong tension between the dark matter interpretation of the DAMA modulation signal and the null result experiments. We also list possible ways-out of this conclusion.« less
Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue
2014-01-01
Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain temporal phylogenetic turnover under random conditions, two null models were used. The first shuffled names of species that are widely used in community phylogenetic analyses. The second simulated demographic processes with careful consideration on the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results were more consistent with previous findings using the second null model suggesting that the second null model is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m×10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales. This could indicate abiotic filtering processes for community assembly. Using variation partitioning, we found phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in BCI plot. The mechanisms for community assembly in BCI and DHS vary from phylogenetic perspective. Only the second null model detected this difference indicating the importance of choosing a proper null model.
Alignment-free sequence comparison (II): theoretical power of comparison statistics.
Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S
2010-11-01
Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy to use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2*approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer,the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration canbe recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version),verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
Experimental evaluation of achromatic phase shifters for mid-infrared starlight suppression.
Gappinger, Robert O; Diaz, Rosemary T; Ksendzov, Alexander; Lawson, Peter R; Lay, Oliver P; Liewer, Kurt M; Loya, Frank M; Martin, Stefan R; Serabyn, Eugene; Wallace, James K
2009-02-10
Phase shifters are a key component of nulling interferometry, one of the potential routes to enabling the measurement of faint exoplanet spectra. Here, three different achromatic phase shifters are evaluated experimentally in the mid-infrared, where such nulling interferometers may someday operate. The methods evaluated include the use of dispersive glasses, a through-focus field inversion, and field reversals on reflection from antisymmetric flat-mirror periscopes. All three approaches yielded deep, broadband, mid-infrared nulls, but the deepest broadband nulls were obtained with the periscope architecture. In the periscope system, average null depths of 4x10(-5) were obtained with a 25% bandwidth, and 2x10(-5) with a 20% bandwidth, at a central wavelength of 9.5 mum. The best short term nulls at 20% bandwidth were approximately 9x10(-6), in line with error budget predictions and the limits of the current generation of hardware.
A null model for microbial diversification
Straub, Timothy J.
2017-01-01
Whether prokaryotes (Bacteria and Archaea) are naturally organized into phenotypically and genetically cohesive units comparable to animal or plant species remains contested, frustrating attempts to estimate how many such units there might be, or to identify the ecological roles they play. Analyses of gene sequences in various closely related prokaryotic groups reveal that sequence diversity is typically organized into distinct clusters, and processes such as periodic selection and extensive recombination are understood to be drivers of cluster formation (“speciation”). However, observed patterns are rarely compared with those obtainable with simple null models of diversification under stochastic lineage birth and death and random genetic drift. Via a combination of simulations and analyses of core and phylogenetic marker genes, we show that patterns of diversity for the genera Escherichia, Neisseria, and Borrelia are generally indistinguishable from patterns arising under a null model. We suggest that caution should thus be taken in interpreting observed clustering as a result of selective evolutionary forces. Unknown forces do, however, appear to play a role in Helicobacter pylori, and some individual genes in all groups fail to conform to the null model. Taken together, we recommend the presented birth−death model as a null hypothesis in prokaryotic speciation studies. It is only when the real data are statistically different from the expectations under the null model that some speciation process should be invoked. PMID:28630293
Integrated Optics Achromatic Nuller for Stellar Interferometry
NASA Technical Reports Server (NTRS)
Ksendzov, Alexander
2012-01-01
This innovation will replace a beam combiner, a phase shifter, and a mode conditioner, thus simplifying the system design and alignment, and saving weight and space in future missions. This nuller is a dielectric-waveguide-based, four-port asymmetric coupler. Its nulling performance is based on the mode-sorting property of adiabatic asymmetric couplers that are intrinsically achromatic. This nuller has been designed, and its performance modeled, in the 6.5-micrometer to 9.25-micrometer spectral interval (36% bandwidth). The calculated suppression of starlight for this 15-cm-long device is 10(exp -5) or better through the whole bandwidth. This is enough to satisfy requirements of a flagship exoplanet-characterization mission. Nulling interferometry is an approach to starlight suppression that will allow the detection and spectral characterization of Earth-like exoplanets. Nulling interferometers separate the light originating from a dim planet from the bright starlight by placing the star at the bottom of a deep, destructive interference fringe, where the starlight is effectively cancelled, or nulled, thus allowing the faint off-axis light to be much more easily seen. This process is referred to as nulling of the starlight. Achromatic nulling technology is a critical component that provides the starlight suppression in interferometer-based observatories. Previously considered space-based interferometers are aimed at approximately 6-to-20-micrometer spectral range. While containing the spectral features of many gases that are considered to be signatures of life, it also offers better planet-to-star brightness ratio than shorter wavelengths. In the Integrated Optics Achromatic Nuller (IOAN) device, the two beams from the interferometer's collecting telescopes pass through the same focusing optic and are incident on the input of the nuller.
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
Interpreting Null Findings from Trials of Alcohol Brief Interventions
Heather, Nick
2014-01-01
The effectiveness of alcohol brief intervention (ABI) has been established by a succession of meta-analyses but, because the effects of ABI are small, null findings from randomized controlled trials are often reported and can sometimes lead to skepticism regarding the benefits of ABI in routine practice. This article first explains why null findings are likely to occur under null hypothesis significance testing (NHST) due to the phenomenon known as “the dance of the p-values.” A number of misconceptions about null findings are then described, using as an example the way in which the results of the primary care arm of a recent cluster-randomized trial of ABI in England (the SIPS project) have been misunderstood. These misinterpretations include the fallacy of “proving the null hypothesis” that lack of a significant difference between the means of sample groups can be taken as evidence of no difference between their population means, and the possible effects of this and related misunderstandings of the SIPS findings are examined. The mistaken inference that reductions in alcohol consumption seen in control groups from baseline to follow-up are evidence of real effects of control group procedures is then discussed and other possible reasons for such reductions, including regression to the mean, research participation effects, historical trends, and assessment reactivity, are described. From the standpoint of scientific progress, the chief problem about null findings under the conventional NHST approach is that it is not possible to distinguish “evidence of absence” from “absence of evidence.” By contrast, under a Bayesian approach, such a distinction is possible and it is explained how this approach could classify ABIs in particular settings or among particular populations as either truly ineffective or as of unknown effectiveness, thus accelerating progress in the field of ABI research. PMID:25076917
Suggestions for presenting the results of data analyses
Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.
2001-01-01
We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentists methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.
The appearance, motion, and disappearance of three-dimensional magnetic null points
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Nicholas A., E-mail: namurphy@cfa.harvard.edu; Parnell, Clare E.; Haynes, Andrew L.
2015-10-15
While theoretical models and simulations of magnetic reconnection often assume symmetry such that the magnetic null point when present is co-located with a flow stagnation point, the introduction of asymmetry typically leads to non-ideal flows across the null point. To understand this behavior, we present exact expressions for the motion of three-dimensional linear null points. The most general expression shows that linear null points move in the direction along which the magnetic field and its time derivative are antiparallel. Null point motion in resistive magnetohydrodynamics results from advection by the bulk plasma flow and resistive diffusion of the magnetic field,more » which allows non-ideal flows across topological boundaries. Null point motion is described intrinsically by parameters evaluated locally; however, global dynamics help set the local conditions at the null point. During a bifurcation of a degenerate null point into a null-null pair or the reverse, the instantaneous velocity of separation or convergence of the null-null pair will typically be infinite along the null space of the Jacobian matrix of the magnetic field, but with finite components in the directions orthogonal to the null space. Not all bifurcating null-null pairs are connected by a separator. Furthermore, except under special circumstances, there will not exist a straight line separator connecting a bifurcating null-null pair. The motion of separators cannot be described using solely local parameters because the identification of a particular field line as a separator may change as a result of non-ideal behavior elsewhere along the field line.« less
NASA Technical Reports Server (NTRS)
Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina;
2016-01-01
This work presents an overview of the This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNCs demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes., a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNCs demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.
Broken chiral symmetry on a null plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beane, Silas R., E-mail: silas@physics.unh.edu
2013-10-15
On a null-plane (light-front), all effects of spontaneous chiral symmetry breaking are contained in the three Hamiltonians (dynamical Poincaré generators), while the vacuum state is a chiral invariant. This property is used to give a general proof of Goldstone’s theorem on a null-plane. Focusing on null-plane QCD with N degenerate flavors of light quarks, the chiral-symmetry breaking Hamiltonians are obtained, and the role of vacuum condensates is clarified. In particular, the null-plane Gell-Mann–Oakes–Renner formula is derived, and a general prescription is given for mapping all chiral-symmetry breaking QCD condensates to chiral-symmetry conserving null-plane QCD condensates. The utility of the null-planemore » description lies in the operator algebra that mixes the null-plane Hamiltonians and the chiral symmetry charges. It is demonstrated that in a certain non-trivial limit, the null-plane operator algebra reduces to the symmetry group SU(2N) of the constituent quark model. -- Highlights: •A proof (the first) of Goldstone’s theorem on a null-plane is given. •The puzzle of chiral-symmetry breaking condensates on a null-plane is solved. •The emergence of spin-flavor symmetries in null-plane QCD is demonstrated.« less
NASA Astrophysics Data System (ADS)
Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter; Ballard, Marlin; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina; Shiri, Ron
2016-07-01
This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify end-to-end high-contrast starlight suppression performance. This pathfinder testbed will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.
Projecting adverse event incidence rates using empirical Bayes methodology.
Ma, Guoguang Julie; Ganju, Jitendra; Huang, Jing
2016-08-01
Although there is considerable interest in adverse events observed in clinical trials, projecting adverse event incidence rates in an extended period can be of interest when the trial duration is limited compared to clinical practice. A naïve method for making projections might involve modeling the observed rates into the future for each adverse event. However, such an approach overlooks the information that can be borrowed across all the adverse event data. We propose a method that weights each projection using a shrinkage factor; the adverse event-specific shrinkage is a probability, based on empirical Bayes methodology, estimated from all the adverse event data, reflecting evidence in support of the null or non-null hypotheses. Also proposed is a technique to estimate the proportion of true nulls, called the common area under the density curves, which is a critical step in arriving at the shrinkage factor. The performance of the method is evaluated by projecting from interim data and then comparing the projected results with observed results. The method is illustrated on two data sets. © The Author(s) 2013.
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) to from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
Divertor with a third-order null of the poloidal field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryutov, D. D.; Umansky, M. V.
2013-09-15
A concept and preliminary feasibility analysis of a divertor with the third-order poloidal field null is presented. The third-order null is the point where not only the field itself but also its first and second spatial derivatives are zero. In this case, the separatrix near the null-point has eight branches, and the number of strike-points increases from 2 (as in the standard divertor) to six. It is shown that this magnetic configuration can be created by a proper adjustment of the currents in a set of three divertor coils. If the currents are somewhat different from the required values, themore » configuration becomes that of three closely spaced first-order nulls. Analytic approach, suitable for a quick orientation in the problem, is used. Potential advantages and disadvantages of this configuration are briefly discussed.« less
A model of the normal and null states of pulsars
NASA Astrophysics Data System (ADS)
Jones, P. B.
1981-12-01
A solvable three-dimensional polar cap model of pair creation and charged particle acceleration has been derived. There are no free parameters of significance apart from the polar surface magnetic flux density. The parameter determining the acceleration potential difference has been obtained by calculation of elementary nuclear and electromagnetic processes. Solutions of the model exist for both normal and null states of a pulsar, and the instability in the normal state leading to the normal to null transition has been identified. The predicted necessary condition for the transition is entirely consistent with observation.
A model of the normal and null states of pulsars
NASA Astrophysics Data System (ADS)
Jones, P. B.
A solvable three dimensional polar cap model of pair creation and charged particle acceleration is derived. There are no free parameters of significance apart from the polar surface magnetic flux density. The parameter CO determining the acceleration potential difference was obtained by calculation of elementary nuclear and electromagnetic processes. Solutions of the model exist for both normal and null states of a pulsar, and the instability in the normal state leading to the normal to null transition is identified. The predicted necessary condition for the transition is entirely consistent with observation.
[Dilemma of null hypothesis in ecological hypothesis's experiment test.
Li, Ji
2016-06-01
Experimental test is one of the major test methods of ecological hypothesis, though there are many arguments due to null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model from Platt (1964) and thus stated that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P)'s non-decisivity inhibit statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H 0 (α=1, β=0) and alternative hypothesis H 1 '(α'=1, β'=0) in ecological progresses are diffe-rent from classic physics, the ecological null hypothesis can neither be strictly tested experimentally. These dilemmas of null hypothesis could be relieved via the reduction of P value, careful selection of null hypothesis, non-centralization of non-null hypothesis, and two-tailed test. However, the statistical null hypothesis significance testing (NHST) should not to be equivalent to the causality logistical test in ecological hypothesis. Hence, the findings and conclusions about methodological studies and experimental tests based on NHST are not always logically reliable.
NASA Astrophysics Data System (ADS)
Hilditch, David; Harms, Enno; Bugner, Marcus; Rüter, Hannes; Brügmann, Bernd
2018-03-01
A long-standing problem in numerical relativity is the satisfactory treatment of future null-infinity. We propose an approach for the evolution of hyperboloidal initial data in which the outer boundary of the computational domain is placed at infinity. The main idea is to apply the ‘dual foliation’ formalism in combination with hyperboloidal coordinates and the generalized harmonic gauge formulation. The strength of the present approach is that, following the ideas of Zenginoğlu, a hyperboloidal layer can be naturally attached to a central region using standard coordinates of numerical relativity applications. Employing a generalization of the standard hyperboloidal slices, developed by Calabrese et al, we find that all formally singular terms take a trivial limit as we head to null-infinity. A byproduct is a numerical approach for hyperboloidal evolution of nonlinear wave equations violating the null-condition. The height-function method, used often for fixed background spacetimes, is generalized in such a way that the slices can be dynamically ‘waggled’ to maintain the desired outgoing coordinate lightspeed precisely. This is achieved by dynamically solving the eikonal equation. As a first numerical test of the new approach we solve the 3D flat space scalar wave equation. The simulations, performed with the pseudospectral bamps code, show that outgoing waves are cleanly absorbed at null-infinity and that errors converge away rapidly as resolution is increased.
Farias, Ariel A; Jaksic, Fabian M
2007-03-01
1. Within mainstream ecological literature, functional structure has been viewed as resulting from the interplay of species interactions, resource levels and environmental variability. Classical models state that interspecific competition generates species segregation and guild formation in stable saturated environments, whereas opportunism causes species aggregation on abundant resources in variable unsaturated situations. 2. Nevertheless, intrinsic functional constraints may result in species-specific differences in resource-use capabilities. This could force some degree of functional structure without assuming other putative causes. However, the influence of such constraints has rarely been tested, and their relative contribution to observed patterns has not been quantified. 3. We used a multiple null-model approach to quantify the magnitude and direction (non-random aggregation or divergence) of the functional structure of a vertebrate predator assemblage exposed to variable prey abundance over an 18-year period. Observed trends were contrasted with predictions from null-models designed in an orthogonal fashion to account independently for the effects of functional constraints and opportunism. Subsequently, the unexplained variation was regressed against environmental variables to search for evidence of interspecific competition. 4. Overall, null-models accounting for functional constraints showed the best fit to the observed data, and suggested an effect of this factor in modulating predator opportunistic responses. However, regression models on residual variation indicated that such an effect was dependent on both total and relative abundance of principal (small mammals) and alternative (arthropods, birds, reptiles) prey categories. 5. In addition, no clear evidence for interspecific competition was found, but differential delays in predator functional responses could explain some of the unaccounted variation. Thus, we call for caution when interpreting empirical data in the context of classical models assuming synchronous responses of consumers to resource levels.
Null-space and statistical significance of first-arrival traveltime inversion
NASA Astrophysics Data System (ADS)
Morozov, Igor B.
2004-03-01
The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.
In vivo time-gated diffuse correlation spectroscopy at quasi-null source-detector separation.
Pagliazzi, M; Sekar, S Konugolu Venkata; Di Sieno, L; Colombo, L; Durduran, T; Contini, D; Torricelli, A; Pifferi, A; Mora, A Dalla
2018-06-01
We demonstrate time domain diffuse correlation spectroscopy at quasi-null source-detector separation by using a fast time-gated single-photon avalanche diode without the need of time-tagging electronics. This approach allows for increased photon collection, simplified real-time instrumentation, and reduced probe dimensions. Depth discriminating, quasi-null distance measurement of blood flow in a human subject is presented. We envision the miniaturization and integration of matrices of optical sensors of increased spatial resolution and the enhancement of the contrast of local blood flow changes.
Parallel Reconstruction Using Null Operations (PRUNO)
Zhang, Jian; Liu, Chunlei; Moseley, Michael E.
2011-01-01
A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD). And an iterative conjugate- gradient approach is proposed to efficiently solve missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially under high accelerating rates. With the aid of PRUO reconstruction, ultra high accelerating parallel imaging can be performed with decent image quality. For example, we have done successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
Unwinding the hairball graph: Pruning algorithms for weighted complex networks
NASA Astrophysics Data System (ADS)
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing
ERIC Educational Resources Information Center
García-Pérez, Miguel A.
2017-01-01
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Magliocca, Nicholas R; Brown, Daniel G; Ellis, Erle C
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement.
Magliocca, Nicholas R.; Brown, Daniel G.; Ellis, Erle C.
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement. PMID:24489696
Patterns in the English language: phonological networks, percolation and assembly models
NASA Astrophysics Data System (ADS)
Stella, Massimo; Brede, Markus
2015-05-01
In this paper we provide a quantitative framework for the study of phonological networks (PNs) for the English language by carrying out principled comparisons to null models, either based on site percolation, randomization techniques, or network growth models. In contrast to previous work, we mainly focus on null models that reproduce lower order characteristics of the empirical data. We find that artificial networks matching connectivity properties of the English PN are exceedingly rare: this leads to the hypothesis that the word repertoire might have been assembled over time by preferentially introducing new words which are small modifications of old words. Our null models are able to explain the ‘power-law-like’ part of the degree distributions and generally retrieve qualitative features of the PN such as high clustering, high assortativity coefficient and small-world characteristics. However, the detailed comparison to expectations from null models also points out significant differences, suggesting the presence of additional constraints in word assembly. Key constraints we identify are the avoidance of large degrees, the avoidance of triadic closure and the avoidance of large non-percolating clusters.
Comparing species interaction networks along environmental gradients.
Pellissier, Loïc; Albouy, Camille; Bascompte, Jordi; Farwig, Nina; Graham, Catherine; Loreau, Michel; Maglianesi, Maria Alejandra; Melián, Carlos J; Pitteloud, Camille; Roslin, Tomas; Rohr, Rudolf; Saavedra, Serguei; Thuiller, Wilfried; Woodward, Guy; Zimmermann, Niklaus E; Gravel, Dominique
2018-05-01
Knowledge of species composition and their interactions, in the form of interaction networks, is required to understand processes shaping their distribution over time and space. As such, comparing ecological networks along environmental gradients represents a promising new research avenue to understand the organization of life. Variation in the position and intensity of links within networks along environmental gradients may be driven by turnover in species composition, by variation in species abundances and by abiotic influences on species interactions. While investigating changes in species composition has a long tradition, so far only a limited number of studies have examined changes in species interactions between networks, often with differing approaches. Here, we review studies investigating variation in network structures along environmental gradients, highlighting how methodological decisions about standardization can influence their conclusions. Due to their complexity, variation among ecological networks is frequently studied using properties that summarize the distribution or topology of interactions such as number of links, connectance, or modularity. These properties can either be compared directly or using a procedure of standardization. While measures of network structure can be directly related to changes along environmental gradients, standardization is frequently used to facilitate interpretation of variation in network properties by controlling for some co-variables, or via null models. Null models allow comparing the deviation of empirical networks from random expectations and are expected to provide a more mechanistic understanding of the factors shaping ecological networks when they are coupled with functional traits. As an illustration, we compare approaches to quantify the role of trait matching in driving the structure of plant-hummingbird mutualistic networks, i.e. a direct comparison, standardized by null models and hypothesis-based metaweb. Overall, our analysis warns against a comparison of studies that rely on distinct forms of standardization, as they are likely to highlight different signals. Fostering a better understanding of the analytical tools available and the signal they detect will help produce deeper insights into how and why ecological networks vary along environmental gradients. © 2017 Cambridge Philosophical Society.
Compensation for PKMζ in long-term potentiation and spatial long-term memory in mutant mice.
Tsokas, Panayiotis; Hsieh, Changchi; Yao, Yudong; Lesburguères, Edith; Wallace, Emma Jane Claire; Tcherepanov, Andrew; Jothianandan, Desingarao; Hartley, Benjamin Rush; Pan, Ling; Rivard, Bruno; Farese, Robert V; Sajan, Mini P; Bergold, Peter John; Hernández, Alejandro Iván; Cottrell, James E; Shouval, Harel Z; Fenton, André Antonio; Sacktor, Todd Charlton
2016-05-17
PKMζ is a persistently active PKC isoform proposed to maintain late-LTP and long-term memory. But late-LTP and memory are maintained without PKMζ in PKMζ-null mice. Two hypotheses can account for these findings. First, PKMζ is unimportant for LTP or memory. Second, PKMζ is essential for late-LTP and long-term memory in wild-type mice, and PKMζ-null mice recruit compensatory mechanisms. We find that whereas PKMζ persistently increases in LTP maintenance in wild-type mice, PKCι/λ, a gene-product closely related to PKMζ, persistently increases in LTP maintenance in PKMζ-null mice. Using a pharmacogenetic approach, we find PKMζ-antisense in hippocampus blocks late-LTP and spatial long-term memory in wild-type mice, but not in PKMζ-null mice without the target mRNA. Conversely, a PKCι/λ-antagonist disrupts late-LTP and spatial memory in PKMζ-null mice but not in wild-type mice. Thus, whereas PKMζ is essential for wild-type LTP and long-term memory, persistent PKCι/λ activation compensates for PKMζ loss in PKMζ-null mice.
Topological structures in the equities market network
Leibon, Gregory; Pauls, Scott; Rockmore, Daniel; Savell, Robert
2008-01-01
We present a new method for articulating scale-dependent topological descriptions of the network structure inherent in many complex systems. The technique is based on “partition decoupled null models,” a new class of null models that incorporate the interaction of clustered partitions into a random model and generalize the Gaussian ensemble. As an application, we analyze a correlation matrix derived from 4 years of close prices of equities in the New York Stock Exchange (NYSE) and National Association of Securities Dealers Automated Quotation (NASDAQ). In this example, we expose (i) a natural structure composed of 2 interacting partitions of the market that both agrees with and generalizes standard notions of scale (e.g., sector and industry) and (ii) structure in the first partition that is a topological manifestation of a well-known pattern of capital flow called “sector rotation.” Our approach gives rise to a natural form of multiresolution analysis of the underlying time series that naturally decomposes the basic data in terms of the effects of the different scales at which it clusters. We support our conclusions and show the robustness of the technique with a successful analysis on a simulated network with an embedded topological structure. The equities market is a prototypical complex system, and we expect that our approach will be of use in understanding a broad class of complex systems in which correlation structures are resident.
Jurkowska, Halina; Niewiadomski, Julie; Hirschberger, Lawrence L.; Roman, Heather B.; Mazor, Kevin M.; Liu, Xiaojing; Locasale, Jason W.; Park, Eunkyue
2016-01-01
The cysteine dioxygenase (Cdo1)-null and the cysteine sulfinic acid decarboxylase (Csad)-null mouse are not able to synthesize hypotaurine/taurine by the cysteine/cysteine sulfinate pathway and have very low tissue taurine levels. These mice provide excellent models for studying the effects of taurine on biological processes. Using these mouse models, we identified betaine:homocysteine methyltransferase (BHMT) as a protein whose in vivo expression is robustly regulated by taurine. BHMT levels are low in liver of both Cdo1-null and Csad-null mice, but are restored to wild-type levels by dietary taurine supplementation. A lack of BHMT activity was indicated by an increase in the hepatic betaine level. In contrast to observations in liver of Cdo1-null and Csad-null mice, BHMT was not affected by taurine supplementation of primary hepatocytes from these mice. Likewise, CSAD abundance was not affected by taurine supplementation of primary hepatocytes, although it was robustly upregulated in liver of Cdo1-null and Csad-null mice and lowered to wild-type levels by dietary taurine supplementation. The mechanism by which taurine status affects hepatic CSAD and BHMT expression appears to be complex and to require factors outside of hepatocytes. Within the liver, mRNA abundance for both CSAD and BHMT was upregulated in parallel with protein levels, indicating regulation of BHMT and CSAD mRNA synthesis or degradation. PMID:26481005
Diaz, Francisco J.; McDonald, Peter R.; Pinter, Abraham; Chaguturu, Rathnam
2018-01-01
Biomolecular screening research frequently searches for the chemical compounds that are most likely to make a biochemical or cell-based assay system produce a strong continuous response. Several doses are tested with each compound and it is assumed that, if there is a dose-response relationship, the relationship follows a monotonic curve, usually a version of the median-effect equation. However, the null hypothesis of no relationship cannot be statistically tested using this equation. We used a linearized version of this equation to define a measure of pharmacological effect size, and use this measure to rank the investigated compounds in order of their overall capability to produce strong responses. The null hypothesis that none of the examined doses of a particular compound produced a strong response can be tested with this approach. The proposed approach is based on a new statistical model of the important concept of response detection limit, a concept that is usually neglected in the analysis of dose-response data with continuous responses. The methodology is illustrated with data from a study searching for compounds that neutralize the infection by a human immunodeficiency virus of brain glioblastoma cells. PMID:24905187
NASA Astrophysics Data System (ADS)
Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.
2016-04-01
Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than the likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method on a Dutch growth data and a baseball pitcher salary data reveals interesting insights. The proposed method is implemented in the R package cthreshER .
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is JWLS ...q̂WLSH ) = 10.3040×106. Under the alternative hypothesis, the weighted least squares cost functional is JWLS (q̂WLS) = 8.8394 × 106. Thus the model
sirt1-null mice develop an autoimmune-like condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sequeira, Jedon; Boily, Gino; Bazinet, Stephanie
2008-10-01
The sirt1 gene encodes a protein deacetylase with a broad spectrum of reported substrates. Mice carrying null alleles for sirt1 are viable on outbred genetic backgrounds so we have examined them in detail to identify the biological processes that are dependent on SIRT1. Sera from adult sirt1-null mice contain antibodies that react with nuclear antigens and immune complexes become deposited in the livers and kidneys of these animals. Some of the sirt1-null animals develop a disease resembling diabetes insipidus when they approach 2 years of age although the relationship to the autoimmunity remains unclear. We interpret these observations as consistentmore » with a role for SIRT1 in sustaining normal immune function and in this way delaying the onset of autoimmune disease.« less
Responder analysis without dichotomization.
Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li
2016-01-01
In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
NASA Astrophysics Data System (ADS)
Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan
2018-03-01
Non-null interferometry could use to measure the radius of curvature (ROC), we have presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of the interferogram-quality metric by the non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained by the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius curvature of the test spherical surface can be obtained until the minimum of the interferogram-quality metric is found. Simulations and experimental results are verified the feasibility of our proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.
The origin of nulls mode changes and timing noise in pulsars
NASA Astrophysics Data System (ADS)
Jones, P. B.
A solvable polar cap model obtained previously has normal states which may be associated with radio emission and null states. The solutions cannot be time-independent; the neutron star surface temperature T and mean surface nuclear charge Z are both functions of time. The normal and null states, and the transitions between them, form closed cycles in the T-Z plane. Normal-null transitions can occur inside a fraction of the area on the neutron star surface intersected by open magnetic flux lines. The fraction increases with pulsar period and becomes unity when the pulsar nears extinction. Frequency noise, mode changes, and pulse nulls have a common explanation in the transitions.
The origin of nulls, mode changes and timing noise in pulsars
NASA Astrophysics Data System (ADS)
Jones, P. B.
1982-09-01
A solvable polar cap model obtained previously has normal states which may be associated with radio emission, and null states. The solutions cannot be time-independent; the neutron star surface temperature T and mean surface nuclear charge Z are both functions of time. The normal and null states and the transitions between them, form closed cycles in the T-Z plane. Normal-null transitions can occur inside a fraction of the area of the neutron star surface intersected by open magnetic flux lines. The fraction increases with pulsar period and becomes unity when the pulsar nears extinction. Frequency noise, mode changes and pulse nulls have a common explanation in the transitions.
Current singularities at quasi-separatrix layers and three-dimensional magnetic nulls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, I. J. D.; Effenberger, Frederic, E-mail: feffen@waikato.ac.nz
2014-11-10
The open problem of how singular current structures form in line-tied, three-dimensional magnetic fields is addressed. A Lagrangian magneto-frictional relaxation method is employed to model the field evolution toward the final near-singular state. Our starting point is an exact force-free solution of the governing magnetohydrodynamic equations that is sufficiently general to allow for topological features like magnetic nulls to be inside or outside the computational domain, depending on a simple set of parameters. Quasi-separatrix layers (QSLs) are present in these structures and, together with the magnetic nulls, they significantly influence the accumulation of current. It is shown that perturbations affectingmore » the lateral boundaries of the configuration lead not only to collapse around the magnetic null but also to significant QSL currents. Our results show that once a magnetic null is present, the developing currents are always attracted to that specific location and show a much stronger scaling with resolution than the currents that form along the QSL. In particular, the null-point scalings can be consistent with models of 'fast' reconnection. The QSL currents also appear to be unbounded but give rise to weaker singularities, independent of the perturbation amplitude.« less
Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil.
Lowe, Rachel; Coelho, Caio As; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier
2016-02-24
Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics.
Hyperboloidal evolution of test fields in three spatial dimensions
NASA Astrophysics Data System (ADS)
Zenginoǧlu, Anıl; Kidder, Lawrence E.
2010-06-01
We present the numerical implementation of a clean solution to the outer boundary and radiation extraction problems within the 3+1 formalism for hyperbolic partial differential equations on a given background. Our approach is based on compactification at null infinity in hyperboloidal scri fixing coordinates. We report numerical tests for the particular example of a scalar wave equation on Minkowski and Schwarzschild backgrounds. We address issues related to the implementation of the hyperboloidal approach for the Einstein equations, such as nonlinear source functions, matching, and evaluation of formally singular terms at null infinity.
Estimating Demand for Industrial and Commercial Land Use Given Economic Forecasts
Batista e Silva, Filipe; Koomen, Eric; Diogo, Vasco; Lavalle, Carlo
2014-01-01
Current developments in the field of land use modelling point towards greater level of spatial and thematic resolution and the possibility to model large geographical extents. Improvements are taking place as computational capabilities increase and socioeconomic and environmental data are produced with sufficient detail. Integrated approaches to land use modelling rely on the development of interfaces with specialized models from fields like economy, hydrology, and agriculture. Impact assessment of scenarios/policies at various geographical scales can particularly benefit from these advances. A comprehensive land use modelling framework includes necessarily both the estimation of the quantity and the spatial allocation of land uses within a given timeframe. In this paper, we seek to establish straightforward methods to estimate demand for industrial and commercial land uses that can be used in the context of land use modelling, in particular for applications at continental scale, where the unavailability of data is often a major constraint. We propose a set of approaches based on ‘land use intensity’ measures indicating the amount of economic output per existing areal unit of land use. A base model was designed to estimate land demand based on regional-specific land use intensities; in addition, variants accounting for sectoral differences in land use intensity were introduced. A validation was carried out for a set of European countries by estimating land use for 2006 and comparing it to observations. The models’ results were compared with estimations generated using the ‘null model’ (no land use change) and simple trend extrapolations. Results indicate that the proposed approaches clearly outperformed the ‘null model’, but did not consistently outperform the linear extrapolation. An uncertainty analysis further revealed that the models’ performances are particularly sensitive to the quality of the input land use data. In addition, unknown future trends of regional land use intensity widen considerably the uncertainty bands of the predictions. PMID:24647587
Minimum spanning tree analysis of the human connectome.
van Dellen, Edwin; Sommer, Iris E; Bohlken, Marc M; Tewarie, Prejaas; Draaisma, Laurijn; Zalesky, Andrew; Di Biase, Maria; Brown, Jesse A; Douw, Linda; Otte, Willem M; Mandl, René C W; Stam, Cornelis J
2018-06-01
One of the challenges of brain network analysis is to directly compare network organization between subjects, irrespective of the number or strength of connections. In this study, we used minimum spanning tree (MST; a unique, acyclic subnetwork with a fixed number of connections) analysis to characterize the human brain network to create an empirical reference network. Such a reference network could be used as a null model of connections that form the backbone structure of the human brain. We analyzed the MST in three diffusion-weighted imaging datasets of healthy adults. The MST of the group mean connectivity matrix was used as the empirical null-model. The MST of individual subjects matched this reference MST for a mean 58%-88% of connections, depending on the analysis pipeline. Hub nodes in the MST matched with previously reported locations of hub regions, including the so-called rich club nodes (a subset of high-degree, highly interconnected nodes). Although most brain network studies have focused primarily on cortical connections, cortical-subcortical connections were consistently present in the MST across subjects. Brain network efficiency was higher when these connections were included in the analysis, suggesting that these tracts may be utilized as the major neural communication routes. Finally, we confirmed that MST characteristics index the effects of brain aging. We conclude that the MST provides an elegant and straightforward approach to analyze structural brain networks, and to test network topological features of individual subjects in comparison to empirical null models. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Fast randomization of large genomic datasets while preserving alteration counts.
Gobbi, Andrea; Iorio, Francesco; Dawson, Kevin J; Wedge, David C; Tamborero, David; Alexandrov, Ludmil B; Lopez-Bigas, Nuria; Garnett, Mathew J; Jurman, Giuseppe; Saez-Rodriguez, Julio
2014-09-01
Studying combinatorial patterns in cancer genomic datasets has recently emerged as a tool for identifying novel cancer driver networks. Approaches have been devised to quantify, for example, the tendency of a set of genes to be mutated in a 'mutually exclusive' manner. The significance of the proposed metrics is usually evaluated by computing P-values under appropriate null models. To this end, a Monte Carlo method (the switching-algorithm) is used to sample simulated datasets under a null model that preserves patient- and gene-wise mutation rates. In this method, a genomic dataset is represented as a bipartite network, to which Markov chain updates (switching-steps) are applied. These steps modify the network topology, and a minimal number of them must be executed to draw simulated datasets independently under the null model. This number has previously been deducted empirically to be a linear function of the total number of variants, making this process computationally expensive. We present a novel approximate lower bound for the number of switching-steps, derived analytically. Additionally, we have developed the R package BiRewire, including new efficient implementations of the switching-algorithm. We illustrate the performances of BiRewire by applying it to large real cancer genomics datasets. We report vast reductions in time requirement, with respect to existing implementations/bounds and equivalent P-value computations. Thus, we propose BiRewire to study statistical properties in genomic datasets, and other data that can be modeled as bipartite networks. BiRewire is available on BioConductor at http://www.bioconductor.org/packages/2.13/bioc/html/BiRewire.html. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tailmore » probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.« less
Influence of Choice of Null Network on Small-World Parameters of Structural Correlation Networks
Hosseini, S. M. Hadi; Kesler, Shelli R.
2013-01-01
In recent years, coordinated variations in brain morphology (e.g., volume, thickness) have been employed as a measure of structural association between brain regions to infer large-scale structural correlation networks. Recent evidence suggests that brain networks constructed in this manner are inherently more clustered than random networks of the same size and degree. Thus, null networks constructed by randomizing topology are not a good choice for benchmarking small-world parameters of these networks. In the present report, we investigated the influence of choice of null networks on small-world parameters of gray matter correlation networks in healthy individuals and survivors of acute lymphoblastic leukemia. Three types of null networks were studied: 1) networks constructed by topology randomization (TOP), 2) networks matched to the distributional properties of the observed covariance matrix (HQS), and 3) networks generated from correlation of randomized input data (COR). The results revealed that the choice of null network not only influences the estimated small-world parameters, it also influences the results of between-group differences in small-world parameters. In addition, at higher network densities, the choice of null network influences the direction of group differences in network measures. Our data suggest that the choice of null network is quite crucial for interpretation of group differences in small-world parameters of structural correlation networks. We argue that none of the available null models is perfect for estimation of small-world parameters for correlation networks and the relative strengths and weaknesses of the selected model should be carefully considered with respect to obtained network measures. PMID:23840672
Multi-objective control for cooperative payload transport with rotorcraft UAVs.
Gimenez, Javier; Gandolfo, Daniel C; Salinas, Lucio R; Rosales, Claudio; Carelli, Ricardo
2018-06-01
A novel kinematic formation controller based on null-space theory is proposed to transport a cable-suspended payload with two rotorcraft UAVs considering collision avoidance, wind perturbations, and properly distribution of the load weight. An accurate 6-DoF nonlinear dynamic model of a helicopter and models for flexible cables and payload are included to test the proposal in a realistic scenario. System stability is demonstrated using Lyapunov theory and several simulation results show the good performance of the approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Fan, Chunyu; Tan, Lingzhao; Zhang, Chunyu; Zhao, Xiuhai; von Gadow, Klaus
2017-10-30
One of the core issues of forest community ecology is the exploration of how ecological processes affect community structure. The relative importance of different processes is still under debate. This study addresses four questions: (1) how is the taxonomic structure of a forest community affected by spatial scale? (2) does the taxonomic structure reveal effects of local processes such as environmental filtering, dispersal limitation or interspecific competition at a local scale? (3) does the effect of local processes on the taxonomic structure vary with the spatial scale? (4) does the analysis based on taxonomic structures provide similar insights when compared with the use of phylogenetic information? Based on the data collected in two large forest observational field studies, the taxonomic structures of the plant communities were analyzed at different sampling scales using taxonomic ratios (number of genera/number of species, number of families/number of species), and the relationship between the number of higher taxa and the number of species. Two random null models were used and the "standardized effect size" (SES) of taxonomic ratios was calculated, to assess possible differences between the observed and simulated taxonomic structures, which may be caused by specific ecological processes. We further applied a phylogeny-based method to compare results with those of the taxonomic approach. As expected, the taxonomic ratios decline with increasing grain size. The quantitative relationship between genera/families and species, described by a linearized power function, showed a good fit. With the exception of the family-species relationship in the Jiaohe study area, the exponents of the genus/family-species relationships did not show any scale dependent effects. The taxonomic ratios of the observed communities had significantly lower values than those of the simulated random community under the test of two null models at almost all scales. Null Model 2 which considered the spatial dispersion of species generated a taxonomic structure which proved to be more consistent with that in the observed community. As sampling sizes increased from 20 m × 20 m to 50 m × 50 m, the magnitudes of SESs of taxonomic ratios increased. Based on the phylogenetic analysis, we found that the Jiaohe plot was phylogenetically clustered at almost all scales. We detected significant phylogenetically overdispersion at the 20 m × 20 m and 30 m × 30 m scales in the Liangshui plot. The results suggest that the effect of abiotic filtering is greater than the effects of interspecific competition in shaping the local community at almost all scales. Local processes influence the taxonomic structures, but their combined effects vary with the spatial scale. The taxonomic approach provides similar insights as the phylogenetic approach, especially when we applied a more conservative null model. Analysing taxonomic structure may be a useful tool for communities where well-resolved phylogenetic data are not available.
Compensation for PKMζ in long-term potentiation and spatial long-term memory in mutant mice
Tsokas, Panayiotis; Hsieh, Changchi; Yao, Yudong; Lesburguères, Edith; Wallace, Emma Jane Claire; Tcherepanov, Andrew; Jothianandan, Desingarao; Hartley, Benjamin Rush; Pan, Ling; Rivard, Bruno; Farese, Robert V; Sajan, Mini P; Bergold, Peter John; Hernández, Alejandro Iván; Cottrell, James E; Shouval, Harel Z; Fenton, André Antonio; Sacktor, Todd Charlton
2016-01-01
PKMζ is a persistently active PKC isoform proposed to maintain late-LTP and long-term memory. But late-LTP and memory are maintained without PKMζ in PKMζ-null mice. Two hypotheses can account for these findings. First, PKMζ is unimportant for LTP or memory. Second, PKMζ is essential for late-LTP and long-term memory in wild-type mice, and PKMζ-null mice recruit compensatory mechanisms. We find that whereas PKMζ persistently increases in LTP maintenance in wild-type mice, PKCι/λ, a gene-product closely related to PKMζ, persistently increases in LTP maintenance in PKMζ-null mice. Using a pharmacogenetic approach, we find PKMζ-antisense in hippocampus blocks late-LTP and spatial long-term memory in wild-type mice, but not in PKMζ-null mice without the target mRNA. Conversely, a PKCι/λ-antagonist disrupts late-LTP and spatial memory in PKMζ-null mice but not in wild-type mice. Thus, whereas PKMζ is essential for wild-type LTP and long-term memory, persistent PKCι/λ activation compensates for PKMζ loss in PKMζ-null mice. DOI: http://dx.doi.org/10.7554/eLife.14846.001 PMID:27187150
A general approach for predicting the behavior of the Supreme Court of the United States
Bommarito, Michael J.; Blackman, Josh
2017-01-01
Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier that leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 cases outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications. PMID:28403140
NASA Technical Reports Server (NTRS)
Shao, Michael; Serabyn, Eugene; Levine, Bruce Martin; Beichman, Charles; Liu, Duncan; Martin, Stefan; Orton, Glen; Mennesson, Bertrand; Morgan, Rhonda; Velusamy, Thangasamy;
2003-01-01
This talk describes a new concept for visible direct detection of Earth like extra solar planets using a nulling coronagraph instrument behind a 4m telescope in space. In the baseline design, a 4 beam nulling interferometer is synthesized from the telescope pupil, producing a very deep theta^4null which is then filtered by a coherent array of single mode fibers to suppress the residual scattered light. With perfect optics, the stellar leakage is less than 1e-11 of the starlight at the location of the planet. With diffraction limited telescope optics (lambda/20), suppression of the starlight to 1e-10 is possible. The concept is described along with the key advantages over more traditional approaches such as apodized aperture telescopes and Lyot type coronagraphs.
Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor
2014-06-01
We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, the partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar as the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling technique, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for the accurate system modeling and reverse ray tracing. The surface figure error then can be easily extracted from the wavefront data in forms of Zernike polynomials by the ROR method. Experiments of the spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI would possess large potential in modern optical shop testing.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth S.; Paradiso, Joseph; Bergmann, Edward V.; Rowell, Derek
1990-01-01
Two steering laws are presented for single-gimbal control moment gyroscopes. An approach using the Moore-Penrose pseudoinverse with a nondirectional null-motion algorithm is shown by example to avoid internal singularities for unidirectional torque commands, for which existing algorithms fail. Because this is still a tangent-based approach, however, singularity avoidance cannot be guaranteed. The singularity robust inverse is introduced as an alternative to the pseudoinverse for computing torque-producing gimbal rates near singular states. This approach, coupled with the nondirectional null algorithm, is shown by example to provide better steering law performance by allowing torque errors to be produced in the vicinity of singular states.
NASA Technical Reports Server (NTRS)
Goepfert, T. M.; McCarthy, M.; Kittrell, F. S.; Stephens, C.; Ullrich, R. L.; Brinkley, B. R.; Medina, D.
2000-01-01
Mammary epithelial cells from p53 null mice have been shown recently to exhibit an increased risk for tumor development. Hormonal stimulation markedly increased tumor development in p53 null mammary cells. Here we demonstrate that mammary tumors arising in p53 null mammary cells are highly aneuploid, with greater than 70% of the tumor cells containing altered chromosome number and a mean chromosome number of 56. Normal mammary cells of p53 null genotype and aged less than 14 wk do not exhibit aneuploidy in primary cell culture. Significantly, the hormone progesterone, but not estrogen, increases the incidence of aneuploidy in morphologically normal p53 null mammary epithelial cells. Such cells exhibited 40% aneuploidy and a mean chromosome number of 54. The increase in aneuploidy measured in p53 null tumor cells or hormonally stimulated normal p53 null cells was not accompanied by centrosome amplification. These results suggest that normal levels of progesterone can facilitate chromosomal instability in the absence of the tumor suppressor gene, p53. The results support the emerging hypothesis based both on human epidemiological and animal model studies that progesterone markedly enhances mammary tumorigenesis.
A model for the characterization of the spatial properties in vestibular neurons
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Bush, G. A.; Perachio, A. A.
1992-01-01
Quantitative study of the static and dynamic response properties of some otolith-sensitive neurons has been difficult in the past partly because their responses to different linear acceleration vectors exhibited no "null" plane and a dependence of phase on stimulus orientation. The theoretical formulation of the response ellipse provides a quantitative way to estimate the spatio-temporal properties of such neurons. Its semi-major axis gives the direction of the polarization vector (i.e., direction of maximal sensitivity) and it estimates the neuronal response for stimulation along that direction. In addition, the semi-minor axis of the ellipse provides an estimate of the neuron's maximal sensitivity in the "null" plane. In this paper, extracellular recordings from otolith-sensitive vestibular nuclei neurons in decerebrate rats were used to demonstrate the practical application of the method. The experimentally observed gain and phase dependence on the orientation angle of the acceleration vector in a head-horizontal plane was described and satisfactorily fit by the response ellipse model. In addition, the model satisfactorily fits neuronal responses in three-dimensions and unequivocally demonstrates that the response ellipse formulation is the general approach to describe quantitatively the spatial properties of vestibular neurons.
ERIC Educational Resources Information Center
Marmolejo-Ramos, Fernando; Cousineau, Denis
2017-01-01
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
Unconscious perception: a model-based approach to method and evidence.
Snodgrass, Michael; Bernat, Edward; Shevrin, Howard
2004-07-01
Unconscious perceptual effects remain controversial because it is hard to rule out alternative conscious perception explanations for them. We present a novel methodological framework, stressing the centrality of specifying the single-process conscious perception model (i.e., the null hypothesis). Various considerations, including those of SDT (Macmillan & Creelman, 1991), suggest that conscious perception functions hierarchically, in such a way that higher level effects (e.g., semantic priming) should not be possible without lower level discrimination (i.e., detection and identification). Relatedly, alternative conscious perception accounts (as well as the exhaustiveness, null sensitivity, and exclusiveness problems-Reingold & Merikle, 1988, 1990) predict positive relationships between direct and indirect measures. Contrariwise, our review suggests that negative and/or nonmonotonic relationships are found, providing strong evidence for unconscious perception and further suggesting that conscious and unconscious perceptual influences are functionally exclusive (cf. Jones, 1987), in such a way that the former typically override the latter when both are present. Consequently, unconscious perceptual effects manifest reliably only when conscious perception is completely absent, which occurs at the objective detection (but not identification) threshold.
Dinucleotide controlled null models for comparative RNA gene prediction.
Gesell, Tanja; Washietl, Stefan
2008-05-27
Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.
Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil
Lowe, Rachel; Coelho, Caio AS; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier
2016-01-01
Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics. DOI: http://dx.doi.org/10.7554/eLife.11285.001 PMID:26910315
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions stepAIC() and bestglm() are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and methods for stepwise regression can be specified in the direction argument with character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing explanatory variables and response variables. The response variable should be in the last column. Varieties of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in more parsimonious model than the Akaike information criterion. PMID:27162786
ON THE NATURE OF RECONNECTION AT A SOLAR CORONAL NULL POINT ABOVE A SEPARATRIX DOME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pontin, D. I.; Priest, E. R.; Galsgaard, K., E-mail: dpontin@maths.dundee.ac.uk
2013-09-10
Three-dimensional magnetic null points are ubiquitous in the solar corona and in any generic mixed-polarity magnetic field. We consider magnetic reconnection at an isolated coronal null point whose fan field lines form a dome structure. Using analytical and computational models, we demonstrate several features of spine-fan reconnection at such a null, including the fact that substantial magnetic flux transfer from one region of field line connectivity to another can occur. The flux transfer occurs across the current sheet that forms around the null point during spine-fan reconnection, and there is no separator present. Also, flipping of magnetic field lines takesmore » place in a manner similar to that observed in the quasi-separatrix layer or slip-running reconnection.« less
Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.
2015-01-01
Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014
Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J
2015-07-16
Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.
On Nulling, Drifting, and Their Interactions in PSRs J1741-0840 and J1840-0840
NASA Astrophysics Data System (ADS)
Gajjar, V.; Yuan, J. P.; Yuen, R.; Wen, Z. G.; Liu, Z. Y.; Wang, N.
2017-12-01
We report detailed investigation of nulling and drifting behavior of two pulsars PSRs J1741-0840 and J1840-0840 observed from the Giant Meterwave Radio Telescope at 625 MHz. PSR J1741-0840 was found to show a nulling fraction (NF) of around 30% ± 5% while PSR J1840-0840 was shown to have an NF of around 50% ± 6%. We measured drifting behavior from different profile components in PSR J1840-0840 for the first time with the leading component showing drifting with 13.5 ± 0.7 periods while the weak trailing component showed drifting of around 18 ± 1 periods. Large nulling hampers accuracy of these quantities derived using standard Fourier techniques. A more accurate comparison was drawn from driftband slopes, measured after sub-pulse modeling. These measurements revealed interesting sporadic and irregular drifting behavior in both pulsars. We conclude that the previously reported different drifting periodicities in the trailing component of PSR J1741-0840 is likely due to the spread in these driftband slopes. We also find that both components of PSR J1840-0840 show similar driftband slopes within the uncertainties. Unique nulling-drifting interaction is identified in PSR J1840-0840 where, on most occasions, the pulsar tends to start nulling after what appears to be the end of a driftband. Similarly, when the pulsar switches back to an emission phase, on most occasions it starts at the beginning of a new driftband in both components. Such behaviors have not been detected in any other pulsars to our knowledge. We also found that PSR J1741-0840 seems to have no memory of its previous burst phase while PSR J1840-0840 clearly exhibits memory of its previous state even after longer nulls for both components. We discuss possible explanations for these intriguing nulling-drifting interactions seen in both pulsars based on various pulsar nulling models.
Testing the null hypothesis: the forgotten legacy of Karl Popper?
Wilkinson, Mick
2013-01-01
Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification such that it is always the null hypothesis that is tested. The write up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package global test. Copyright © 2012 John Wiley & Sons, Ltd.
Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A
2017-03-21
This study corresponds to the second part of a companion paper devoted to the development of Bayesian multiple regression models accounting for randomness of genotypes in across population genome-wide prediction. This family of models considers heterogeneous and correlated marker effects and allelic frequencies across populations, and has the ability of considering records from non-genotyped individuals and individuals with missing genotypes in any subset of loci without the need for previous imputation, taking into account uncertainty about imputed genotypes. This paper extends this family of models by considering multivariate spike and slab conditional priors for marker allele substitution effects and contains derivations of approximate Bayes factors and fractional Bayes factors to compare models from part I and those developed here with their null versions. These null versions correspond to simpler models ignoring heterogeneity of populations, but still accounting for randomness of genotypes. For each marker loci, the spike component of priors corresponded to point mass at 0 in R S , where S is the number of populations, and the slab component was a S-variate Gaussian distribution, independent conditional priors were assumed. For the Gaussian components, covariance matrices were assumed to be either the same for all markers or different for each marker. For null models, the priors were simply univariate versions of these finite mixture distributions. Approximate algebraic expressions for Bayes factors and fractional Bayes factors were found using the Laplace approximation. Using the simulated datasets described in part I, these models were implemented and compared with models derived in part I using measures of predictive performance based on squared Pearson correlations, Deviance Information Criterion, Bayes factors, and fractional Bayes factors. The extensions presented here enlarge our family of genome-wide prediction models making it more flexible in the sense that it now offers more modeling options. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Yong Q; Friedman, David B; Wang, Zhe; Woodruff, Elvin; Pan, Luyuan; O'donnell, Janis; Broadie, Kendal
2005-03-01
Fragile X syndrome is the most common form of inherited mental retardation, associated with both cognitive and behavioral anomalies. The disease is caused by silencing of the fragile X mental retardation 1 (fmr1) gene, which encodes the mRNA-binding, translational regulator FMRP. Previously we established a disease model through mutation of Drosophila fmr1 (dfmr1) and showed that loss of dFMRP causes defects in neuronal structure, function, and behavioral output similar to the human disease state. To uncover molecular targets of dFMRP in the brain, we use here a proteomic approach involving two-dimensional difference gel electrophoresis analyses followed by mass spectrometry identification of proteins with significantly altered expression in dfmr1 null mutants. We then focus on two misregulated enzymes, phenylalanine hydroxylase (Henna) and GTP cyclohydrolase (Punch), both of which mediate in concert the synthetic pathways of two key monoamine neuromodulators, dopamine and serotonin. Brain enzymatic assays show a nearly 2-fold elevation of Punch activity in dfmr1 null mutants. Consistently brain neurochemical assays show that both dopamine and serotonin are significantly increased in dfmr1 null mutants. At a cellular level, dfmr1 null mutant neurons display a highly significant elevation of the dense core vesicles that package these monoamine neuromodulators for secretion. Taken together, these data indicate that dFMRP normally down-regulates the monoamine pathway, which is consequently up-regulated in the mutant condition. Elevated brain levels of dopamine and serotonin provide a plausible mechanistic explanation for aspects of cognitive and behavioral deficits in human patients.
Dai, Mei; Liou, Benjamin; Swope, Brittany; Wang, Xiaohong; Zhang, Wujuan; Inskeep, Venette; Grabowski, Gregory A; Sun, Ying; Pan, Dao
2016-01-01
To study the neuronal deficits in neuronopathic Gaucher Disease (nGD), the chronological behavioral profiles and the age of onset of brain abnormalities were characterized in a chronic nGD mouse model (9V/null). Progressive accumulation of glucosylceramide (GC) and glucosylsphingosine (GS) in the brain of 9V/null mice were observed at as early as 6 and 3 months of age for GC and GS, respectively. Abnormal accumulation of α-synuclein was present in the 9V/null brain as detected by immunofluorescence and Western blot analysis. In a repeated open-field test, the 9V/null mice (9 months and older) displayed significantly less environmental habituation and spent more time exploring the open-field than age-matched WT group, indicating the onset of short-term spatial memory deficits. In the marble burying test, the 9V/null group had a shorter latency to initiate burying activity at 3 months of age, whereas the latency increased significantly at ≥12 months of age; 9V/null females buried significantly more marbles to completion than the WT group, suggesting an abnormal response to the instinctive behavior and an abnormal activity in non-associative anxiety-like behavior. In the conditional fear test, only the 9V/null males exhibited a significant decrease in response to contextual fear, but both genders showed less response to auditory-cued fear compared to age- and gender-matched WT at 12 months of age. These results indicate hippocampus-related emotional memory defects. Abnormal gait emerged in 9V/null mice with wider front-paw and hind-paw widths, as well as longer stride in a gender-dependent manner with different ages of onset. Significantly higher liver- and spleen-to-body weight ratios were detected in 9V/null mice with different ages of onsets. These data provide temporal evaluation of neurobehavioral dysfunctions and brain pathology in 9V/null mice that can be used for experimental designs to evaluate novel therapies for nGD.
Dai, Mei; Liou, Benjamin; Swope, Brittany; Wang, Xiaohong; Zhang, Wujuan; Inskeep, Venette; Grabowski, Gregory A.; Sun, Ying; Pan, Dao
2016-01-01
To study the neuronal deficits in neuronopathic Gaucher Disease (nGD), the chronological behavioral profiles and the age of onset of brain abnormalities were characterized in a chronic nGD mouse model (9V/null). Progressive accumulation of glucosylceramide (GC) and glucosylsphingosine (GS) in the brain of 9V/null mice were observed at as early as 6 and 3 months of age for GC and GS, respectively. Abnormal accumulation of α-synuclein was present in the 9V/null brain as detected by immunofluorescence and Western blot analysis. In a repeated open-field test, the 9V/null mice (9 months and older) displayed significantly less environmental habituation and spent more time exploring the open-field than age-matched WT group, indicating the onset of short-term spatial memory deficits. In the marble burying test, the 9V/null group had a shorter latency to initiate burying activity at 3 months of age, whereas the latency increased significantly at ≥12 months of age; 9V/null females buried significantly more marbles to completion than the WT group, suggesting an abnormal response to the instinctive behavior and an abnormal activity in non-associative anxiety-like behavior. In the conditional fear test, only the 9V/null males exhibited a significant decrease in response to contextual fear, but both genders showed less response to auditory-cued fear compared to age- and gender-matched WT at 12 months of age. These results indicate hippocampus-related emotional memory defects. Abnormal gait emerged in 9V/null mice with wider front-paw and hind-paw widths, as well as longer stride in a gender-dependent manner with different ages of onset. Significantly higher liver- and spleen-to-body weight ratios were detected in 9V/null mice with different ages of onsets. These data provide temporal evaluation of neurobehavioral dysfunctions and brain pathology in 9V/null mice that can be used for experimental designs to evaluate novel therapies for nGD. PMID:27598339
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model in order to account for the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assumed that there exists a cut-off value such that smaller counts than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on screening process so that the number of mutations exceeding a certain value should be considered as significant mutations. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Evaluation of null-point detection methods on simulation data
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in the particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lentghs. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by the fractions of ion inertial length, as it is for CLUSTER, they are able to locate both the isolated null-points, and the pinches. We apply the method to the real CLUSTER data and speculate how common are pinches in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
Shocks and currents in stratified atmospheres with a magnetic null point
NASA Astrophysics Data System (ADS)
Tarr, Lucas A.; Linton, Mark
2017-08-01
We use the resistive MHD code LARE (Arber et al 2001) to inject a compressive MHD wavepacket into a stratified atmosphere that has a single magnetic null point, as recently described in Tarr et al 2017. The 2.5D simulation represents a slice through a small ephemeral region or area of plage. The strong gradients in field strength and connectivity related to the presence of the null produce substantially different dynamics compared to the more slowly varying fields typically used in simple sunspot models. The wave-null interaction produces a fast mode shock that collapses the null into a current sheet and generates a set of outward propagating (from the null) slow mode shocks confined to field lines near each separatrix. A combination of oscillatory reconnection and shock dissipation ultimately raise the plasma's internal energy at the null and along each separatrix by 25-50% above the background. The resulting pressure gradients must be balanced by Lorentz forces, so that the final state has contact discontinuities along each separatrix and a persistent current at the null. The simulation demonstrates that fast and slow mode waves localize currents to the topologically important locations of the field, just as their Alfvenic counterparts do, and also illustrates the necessity of treating waves and reconnection as coupled phenomena.
NASA Astrophysics Data System (ADS)
Straka, Mika J.; Caldarelli, Guido; Squartini, Tiziano; Saracco, Fabio
2018-04-01
Bipartite networks provide an insightful representation of many systems, ranging from mutualistic networks of species interactions to investment networks in finance. The analyses of their topological structures have revealed the ubiquitous presence of properties which seem to characterize many—apparently different—systems. Nestedness, for example, has been observed in biological plant-pollinator as well as in country-product exportation networks. Due to the interdisciplinary character of complex networks, tools developed in one field, for example ecology, can greatly enrich other areas of research, such as economy and finance, and vice versa. With this in mind, we briefly review several entropy-based bipartite null models that have been recently proposed and discuss their application to real-world systems. The focus on these models is motivated by the fact that they show three very desirable features: analytical character, general applicability, and versatility. In this respect, entropy-based methods have been proven to perform satisfactorily both in providing benchmarks for testing evidence-based null hypotheses and in reconstructing unknown network configurations from partial information. Furthermore, entropy-based models have been successfully employed to analyze ecological as well as economic systems. As an example, the application of entropy-based null models has detected early-warning signals, both in economic and financial systems, of the 2007-2008 world crisis. Moreover, they have revealed a statistically-significant export specialization phenomenon of country export baskets in international trade, a result that seems to reconcile Ricardo's hypothesis in classical economics with recent findings on the (empirical) diversification industrial production at the national level. Finally, these null models have shown that the information contained in the nestedness is already accounted for by the degree sequence of the corresponding graphs.
Non-null annular subaperture stitching interferometry for aspheric test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A non-null annular subaperture stitching interferometry (NASSI), combining the subaperture stitching idea and non-null test method, is proposed for steep aspheric testing. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere, to generate different aspherical wavefronts as the references. The coverage subaperture number would thus be reduced greatly for the better performance of aspherical wavefronts in matching the local slope of aspheric surfaces. Instead of various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including the interferometric system modeling and subaperture misalignments modeling. With the multi-configuration model, full aperture figure error would be extracted in form of Zernike polynomials from subapertures wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation exhibits the comparison of the performance of the NASSI and standard ASSI, which demonstrates the high accuracy of the NASSI in testing steep aspheric. Experimental results of NASSI are shown to be in good agreement with that of Zygo® VerifireTM Asphere interferometer.
Capturing the Flatness of a peer-to-peer lending network through random and selected perturbations
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis D.; Singh, Pramesh; Uparna, Jayaram; Horvat, Emoke-Agnes; Szymanski, Boleslaw K.; Korniss, Gyorgy; Bakdash, Jonathan Z.; Uzzi, Brian
Null models are established tools that have been used in network analysis to uncover various structural patterns. They quantify the deviance of an observed network measure to that given by the null model. We construct a null model for weighted, directed networks to identify biased links (carrying significantly different weights than expected according to the null model) and thus quantify the flatness of the system. Using this model, we study the flatness of Kiva, a large international crownfinancing network of borrowers and lenders, aggregated to the country level. The dataset spans the years from 2006 to 2013. Our longitudinal analysis shows that flatness of the system is reducing over time, meaning the proportion of biased inter-country links is growing. We extend our analysis by testing the robustness of the flatness of the network in perturbations on the links' weights or the nodes themselves. Examples of such perturbations are event shocks (e.g. erecting walls) or regulatory shocks (e.g. Brexit). We find that flatness is unaffected by random shocks, but changes after shocks target links with a large weight or bias. The methods we use to capture the flatness are based on analytics, simulations, and numerical computations using Shannon's maximum entropy. Supported by ARL NS-CTA.
On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
1997-01-01
In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency is directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min- max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.
Giovannelli, Gaia; Giacomazzi, Giorgia; Grosemans, Hanne; Sampaolesi, Maurilio
2018-02-24
Limb-girdle muscular dystrophy type 2E (LGMD2E) is caused by mutations in the β-sarcoglycan gene, which is expressed in skeletal, cardiac, and smooth muscles. β-Sarcoglycan-deficient (Sgcb-null) mice develop severe muscular dystrophy and cardiomyopathy with focal areas of necrosis. In this study we performed morphological (histological and cellular characterization) and functional (isometric tetanic force and fatigue) analyses in dystrophic mice. Comparison studies were carried out in 1-month-old (clinical onset of the disease) and 7-month-old control mice (C57Bl/6J, Rag2/γc-null) and immunocompetent and immunodeficient dystrophic mice (Sgcb-null and Sgcb/Rag2/γc-null, respectively). We found that the lack of an immunological system resulted in an increase of calcification in striated muscles without impairing extensor digitorum longus muscle performance. Sgcb/Rag2/γc-null muscles showed a significant reduction of alkaline phosphate-positive mesoangioblasts. The immunological system counteracts skeletal muscle degeneration in the murine model of LGMD2E. Muscle Nerve, 2018. © 2018 The Authors. Muscle & Nerve Published by Wiley Periodicals, Inc.
Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror
NASA Astrophysics Data System (ADS)
Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong
2018-06-01
Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J; Scholten, Travis L.
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) shouldmore » not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.« less
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
Tsuchiya, Hiroyuki; da Costa, Kerry-Ann; Lee, Sangmin; Renga, Barbara; Jaeschke, Hartmut; Yang, Zhihong; Orena, Stephen J; Goedken, Michael J; Zhang, Yuxia; Kong, Bo; Lebofsky, Margitta; Rudraiah, Swetha; Smalling, Rana; Guo, Grace; Fiorucci, Stefano; Zeisel, Steven H; Wang, Li
2015-05-01
Hyperhomocysteinemia is often associated with liver and metabolic diseases. We studied nuclear receptors that mediate oscillatory control of homocysteine homeostasis in mice. We studied mice with disruptions in Nr0b2 (called small heterodimer partner [SHP]-null mice), betaine-homocysteine S-methyltransferase (Bhmt), or both genes (BHMT-null/SHP-null mice), along with mice with wild-type copies of these genes (controls). Hyperhomocysteinemia was induced by feeding mice alcohol (National Institute on Alcohol Abuse and Alcoholism binge model) or chow diets along with water containing 0.18% DL-homocysteine. Some mice were placed on diets containing cholic acid (1%) or cholestyramine (2%) or high-fat diets (60%). Serum and livers were collected during a 24-hour light-dark cycle and analyzed by RNA-seq, metabolomic, and quantitative polymerase chain reaction, immunoblot, and chromatin immunoprecipitation assays. SHP-null mice had altered timing in expression of genes that regulate homocysteine metabolism compared with control mice. Oscillatory production of S-adenosylmethionine, betaine, choline, phosphocholine, glyceophosphocholine, cystathionine, cysteine, hydrogen sulfide, glutathione disulfide, and glutathione, differed between SHP-null mice and control mice. SHP inhibited transcriptional activation of Bhmt and cystathionine γ-lyase by FOXA1. Expression of Bhmt and cystathionine γ-lyase was decreased when mice were fed cholic acid but increased when they were placed on diets containing cholestyramine or high-fat content. Diets containing ethanol or homocysteine induced hyperhomocysteinemia and glucose intolerance in control, but not SHP-null, mice. In BHMT-null and BHMT-null/SHP-null mice fed a control liquid, lipid vacuoles were observed in livers. Ethanol feeding induced accumulation of macrovesicular lipid vacuoles to the greatest extent in BHMT-null and BHMT-null/SHP-null mice. Disruption of Shp in mice alters timing of expression of genes that regulate homocysteine metabolism and the liver responses to ethanol and homocysteine. SHP inhibits the transcriptional activation of Bhmt and cystathionine γ-lyase by FOXA1. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Measurement of steep aspheric surfaces using improved two-wavelength phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Zhang, Liqiong; Wang, Shaopu; Hu, Yao; Hao, Qun
2017-10-01
Optical components with aspheric surfaces can improve the imaging quality of optical systems, and also provide extra advantages such as lighter weight, smaller volume and simper structure. In order to satisfy these performance requirements, the surface error of aspheric surfaces, especially high departure aspheric surfaces must be measured accurately and conveniently. The major obstacle of traditional null-interferometry for aspheric surface under test is that specific and complex null optics need to be designed to fully compensate for the normal aberration of the aspheric surface under test. However, non-null interferometry partially compensating for the aspheric normal aberration can test aspheric surfaces without specific null optics. In this work, a novel non-null test approach of measuring the deviation between aspheric surfaces and the best reference sphere by using improved two-wavelength phase shifting interferometer is described. With the help of the calibration based on reverse iteration optimization, we can effectively remove the retrace error and thus improve the accuracy. Simulation results demonstrate that this method can measure the aspheric surface with the departure of over tens of microns from the best reference sphere, which introduces approximately 500λ of wavefront aberration at the detector.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low-frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the case of the right hand analytically dependent on frequency. The operator's null space is treated by decomposing the solution into the part in the null space and orthogonal to it.
Harrington, S; Reeder, T W
2017-02-01
The binary-state speciation and extinction (BiSSE) model has been used in many instances to identify state-dependent diversification and reconstruct ancestral states. However, recent studies have shown that the standard procedure of comparing the fit of the BiSSE model to constant-rate birth-death models often inappropriately favours the BiSSE model when diversification rates vary in a state-independent fashion. The newly developed HiSSE model enables researchers to identify state-dependent diversification rates while accounting for state-independent diversification at the same time. The HiSSE model also allows researchers to test state-dependent models against appropriate state-independent null models that have the same number of parameters as the state-dependent models being tested. We reanalyse two data sets that originally used BiSSE to reconstruct ancestral states within squamate reptiles and reached surprising conclusions regarding the evolution of toepads within Gekkota and viviparity across Squamata. We used this new method to demonstrate that there are many shifts in diversification rates across squamates. We then fit various HiSSE submodels and null models to the state and phylogenetic data and reconstructed states under these models. We found that there is no single, consistent signal for state-dependent diversification associated with toepads in gekkotans or viviparity across all squamates. Our reconstructions show limited support for the recently proposed hypotheses that toepads evolved multiple times independently in Gekkota and that transitions from viviparity to oviparity are common in Squamata. Our results highlight the importance of considering an adequate pool of models and null models when estimating diversification rate parameters and reconstructing ancestral states. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
Magnetoacoustic Waves in a Stratified Atmosphere with a Magnetic Null Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarr, Lucas A.; Linton, Mark; Leake, James, E-mail: lucas.tarr.ctr@nrl.navy.mil
2017-03-01
We perform nonlinear MHD simulations to study the propagation of magnetoacoustic waves from the photosphere to the low corona. We focus on a 2D system with a gravitationally stratified atmosphere and three photospheric concentrations of magnetic flux that produce a magnetic null point with a magnetic dome topology. We find that a single wavepacket introduced at the lower boundary splits into multiple secondary wavepackets. A portion of the packet refracts toward the null owing to the varying Alfvén speed. Waves incident on the equipartition contour surrounding the null, where the sound and Alfvén speeds coincide, partially transmit, reflect, and mode-convertmore » between branches of the local dispersion relation. Approximately 15.5% of the wavepacket’s initial energy ( E {sub input}) converges on the null, mostly as a fast magnetoacoustic wave. Conversion is very efficient: 70% of the energy incident on the null is converted to slow modes propagating away from the null, 7% leaves as a fast wave, and the remaining 23% (0.036 E {sub input}) is locally dissipated. The acoustic energy leaving the null is strongly concentrated along field lines near each of the null’s four separatrices. The portion of the wavepacket that refracts toward the null, and the amount of current accumulation, depends on the vertical and horizontal wavenumbers and the centroid position of the wavepacket as it crosses the photosphere. Regions that refract toward or away from the null do not simply coincide with regions of open versus closed magnetic field or regions of particular field orientation. We also model wavepacket propagation using a WKB method and find that it agrees qualitatively, though not quantitatively, with the results of the numerical simulation.« less
High frequency generation in the corona: Resonant cavities
NASA Astrophysics Data System (ADS)
Santamaria, I. C.; Van Doorsselaere, T.
2018-03-01
Aims: Null points are prominent magnetic field singularities in which the magnetic field strength strongly decreases in very small spatial scales. Around null points, predicted to be ubiquitous in the solar chromosphere and corona, the wave behavior changes considerably. Null points are also responsible for driving very energetic phenomena, and for contributing to chromospheric and coronal heating. In previous works we demonstrated that slow magneto-acoustic shock waves were generated in the chromosphere propagate through the null point, thereby producing a train of secondary shocks escaping along the field lines. A particular combination of the shock wave speeds generates waves at a frequency of 80 MHz. The present work aims to investigate this high frequency region around a coronal null point to give a plausible explanation to its generation at that particular frequency. Methods: We carried out a set of two-dimensional numerical simulations of wave propagation in the neighborhood of a null point located in the corona. We varied both the amplitude of the driver and the atmospheric properties to investigate the sensitivity of the high frequency waves to these parameters. Results: We demonstrate that the wave frequency is sensitive to the atmospheric parameters in the corona, but it is independent of the strength of the driver. Thus, the null point behaves as a resonant cavity generating waves at specific frequencies that depend on the background equilibrium model. Moreover, we conclude that the high frequency wave train generated at the null point is not necessarily a result of the interaction between the null point and a shock wave. This wave train can be also developed by the interaction between the null point and fast acoustic-like magneto-acoustic waves, that is, this interaction within the linear regime.
Holographic complexity in Vaidya spacetimes. Part I
NASA Astrophysics Data System (ADS)
Chapman, Shira; Marrochio, Hugo; Myers, Robert C.
2018-06-01
We examine holographic complexity in time-dependent Vaidya spacetimes with both the complexity=volume (CV) and complexity=action (CA) proposals. We focus on the evolution of the holographic complexity for a thin shell of null fluid, which collapses into empty AdS space and forms a (one-sided) black hole. In order to apply the CA approach, we introduce an action principle for the null fluid which sources the Vaidya geometries, and we carefully examine the contribution of the null shell to the action. Further, we find that adding a particular counterterm on the null boundaries of the Wheeler-DeWitt patch is essential if the gravitational action is to properly describe the complexity of the boundary state. For both the CV proposal and the CA proposal (with the extra boundary counterterm), the late time limit of the growth rate of the holographic complexity for the one-sided black hole is precisely the same as that found for an eternal black hole.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single gimbal Control Moment Gyro (CMG) systems was exploited to aid in the understanding and design of single gimbal CMG Steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and unescapable singular states. Detailed analysis of the Jacobian matrix null-space was performed and results were used to develop and test a variety of single gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid Elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid Elliptic type singularities under certain conditions. The SR-inverse, with appropriate null motion was proposed as a general approach to singularity avoidance, because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared to the other available and proposed pseudoinverse-based Steering laws.
The continuum of hydroclimate variability in western North America during the last millennium
Ault, Toby R.; Cole, Julia E.; Overpeck, Jonathan T.; Pederson, Gregory T.; St. George, Scott; Otto-Bliesner, Bette; Woodhouse, Connie A.; Deser, Clara
2013-01-01
The distribution of climatic variance across the frequency spectrum has substantial importance for anticipating how climate will evolve in the future. Here we estimate power spectra and power laws (ß) from instrumental, proxy, and climate model data to characterize the hydroclimate continuum in western North America (WNA). We test the significance of our estimates of spectral densities and ß against the null hypothesis that they reflect solely the effects of local (non-climate) sources of autocorrelation at the monthly timescale. Although tree-ring based hydroclimate reconstructions are generally consistent with this null hypothesis, values of ß calculated from long-moisture sensitive chronologies (as opposed to reconstructions), and other types of hydroclimate proxies, exceed null expectations. We therefore argue that there is more low-frequency variability in hydroclimate than monthly autocorrelation alone can generate. Coupled model results archived as part of the Climate Model Intercomparison Project 5 (CMIP5) are consistent with the null hypothesis and appear unable to generate variance in hydroclimate commensurate with paleoclimate records. Consequently, at decadal to multidecadal timescales there is more variability in instrumental and proxy data than in the models, suggesting that the risk of prolonged droughts under climate change may be underestimated by CMIP5 simulations of the future.
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models 1he null hypothesis 1ha1 the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypotheses that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
Using Maximum Entropy to Find Patterns in Genomes
NASA Astrophysics Data System (ADS)
Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also be easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
Circumpulsar Asteroids: Inferences from Nulling Statistics and High Energy Correlations
NASA Astrophysics Data System (ADS)
Shannon, Ryan; Cordes, J. M.
2006-12-01
We have proposed that some classes of radio pulsar variability are associated with the entry of neutral asteroidal material into the pulsar magnetosphere. The region surrounding neutron stars is polluted with supernova fall-back material, which collapses and condenses into an asteroid-bearing disk that is stable for millions of years. Over time, collisional and radiative processes cause the asteroids to migrate inward until they are heated to the point of ionization. For older and cooler pulsars, asteroids ionize within the large magnetospheres and inject a sufficient amount of charged particles to alter the electrodynamics of the gap regions and modulate emission processes. This extrinsic model unifies many observed phenomena of variability that occur on time scales that are disparate with the much shorter time scales associated with pulsars and their magnetospheres. One such type of variability is nulling, in which certain pulsars exhibit episodes of quiescence that for some objects may be as short as a few pulse periods, but, for others, is longer than days. Here, in the context of this model, we examine the nulling phenomenon. We analyze the relationship between in-falling material and the statistics of nulling. In addition, as motivation for further high energy observations, we consider the relationship between the nulling and other magnetospheric processes.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
The Many Null Distributions of Person Fit Indices.
ERIC Educational Resources Information Center
Molenaar, Ivo W.; Hoijtink, Herbert
1990-01-01
Statistical properties of person fit indices are reviewed as indicators of the extent to which a person's score pattern is in agreement with a measurement model. Distribution of a fit index and ability-free fit evaluation are discussed. The null distribution was simulated for a test of 20 items. (SLD)
Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J
2013-08-01
Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This issue is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
Behavior of the maximum likelihood in quantum state tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis L.; Blume-Kohout, Robin
2018-02-01
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
Behavior of the maximum likelihood in quantum state tomography
Blume-Kohout, Robin J; Scholten, Travis L.
2018-02-22
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) shouldmore » not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.« less
Behavior of the maximum likelihood in quantum state tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J; Scholten, Travis L.
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) shouldmore » not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.« less
The role of oxygen as a regulator of stem cell fate during fracture repair in TSP2-null mice.
Burke, Darren; Dishowitz, Michael; Sweetwyne, Mariya; Miedel, Emily; Hankenson, Kurt D; Kelly, Daniel J
2013-10-01
It is often difficult to decouple the relative importance of different factors in regulating MSC differentiation. Genetically modified mice provide model systems whereby some variables can be manipulated while others are kept constant. Fracture repair in thrombospondin-2 (TSP2)-null mice is characterized by reduced endochondral ossification and enhanced intramembranous bone formation. The proposed mechanism for this shift in MSC fate is that increased vascular density and hence oxygen availability in TSP2-null mice regulates differentiation. However, TSP2 is multifunctional and regulates other aspects of the regenerative cascade, such as MSC proliferation. The objective of this study is to use a previously developed computational model of tissue differentiation, in which substrate stiffness and oxygen tension regulate stem cell differentiation, to simulate potential mechanisms which may drive alterations in MSC fate in TSP2-null mice. Four models (increased cell proliferation, increased numbers of MSCs in the marrow decreased cellular oxygen consumption, and an initially stiffer callus) were not predictive of experimental observations in TSP2-null mice. In contrast, increasing the rate of angiogenic progression led to a prediction of greater intramembranous ossification, diminished endochondral ossification, and a reduced region of hypoxia in the fracture callus similar to that quantified experimentally by the immunohistochemical detection of pimonidazole adducts that develop with hypoxia. This study therefore provides further support for the hypothesis that oxygen availability during early fracture healing is a key regulator of MSC bipotential differentiation, and furthermore, it highlights the advantages of integrating computational models with genetically modified mouse studies for further elucidating mechanisms regulating stem cell fate. Copyright © 2013 Orthopaedic Research Society.
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
NASA Astrophysics Data System (ADS)
Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.
2015-09-01
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
A SIGNIFICANCE TEST FOR THE LASSO1
Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert
2014-01-01
In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ12 distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ12 under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the l1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
NASA Astrophysics Data System (ADS)
Caimmi, R.
2011-08-01
Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts ( York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter ( Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓ σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies no definitive statement can be made at present. A comparison is also made between different expressions of regression line slope and intercept variance estimators, where fractional discrepancies are found to be not exceeding a few percent, which grows up to about 20% in the presence of large dispersion data. An extension of the formalism to structural models is left to a forthcoming paper.
Community Detection for Correlation Matrices
NASA Astrophysics Data System (ADS)
MacMahon, Mel; Garlaschelli, Diego
2015-04-01
A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: A synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criterions like AIC.
A general methodology for population analysis
NASA Astrophysics Data System (ADS)
Lazov, Petar; Lazov, Igor
2014-12-01
For a given population with N - current and M - maximum number of entities, modeled by a Birth-Death Process (BDP) with size M+1, we introduce utilization parameter ρ, ratio of the primary birth and death rates in that BDP, which, physically, determines (equilibrium) macrostates of the population, and information parameter ν, which has an interpretation as population information stiffness. The BDP, modeling the population, is in the state n, n=0,1,…,M, if N=n. In presence of these two key metrics, applying continuity law, equilibrium balance equations concerning the probability distribution pn, n=0,1,…,M, of the quantity N, pn=Prob{N=n}, in equilibrium, and conservation law, and relying on the fundamental concepts population information and population entropy, we develop a general methodology for population analysis; thereto, by definition, population entropy is uncertainty, related to the population. In this approach, what is its essential contribution, the population information consists of three basic parts: elastic (Hooke's) or absorption/emission part, synchronization or inelastic part and null part; the first two parts, which determine uniquely the null part (the null part connects them), are the two basic components of the Information Spectrum of the population. Population entropy, as mean value of population information, follows this division of the information. A given population can function in information elastic, antielastic and inelastic regime. In an information linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric in this methodology. Namely, right supposing a population with infinite size, the most of the key quantities and results for populations with finite size, emerged in this methodology, vanish.
Percentiles of the null distribution of 2 maximum lod score tests.
Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R
2004-01-01
We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction. Also considered is the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS) as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture p(2)(0) + (1 - p)(2)(v). The values of MMLS appear to fit the mixture 0.20(2)(0) + 0.80chi(2)(1.6). The mixture distribution 0.13(2)(0) + 0.87chi(2)(2.8). appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
Huang, J; Vieland, V J
2001-01-01
It is well known that the asymptotic null distribution of the homogeneity lod score (LOD) does not depend on the genetic model specified in the analysis. When appropriately rescaled, the LOD is asymptotically distributed as 0.5 chi(2)(0) + 0.5 chi(2)(1), regardless of the assumed trait model. However, because locus heterogeneity is a common phenomenon, the heterogeneity lod score (HLOD), rather than the LOD itself, is often used in gene mapping studies. We show here that, in contrast with the LOD, the asymptotic null distribution of the HLOD does depend upon the genetic model assumed in the analysis. In affected sib pair (ASP) data, this distribution can be worked out explicitly as (0.5 - c)chi(2)(0) + 0.5chi(2)(1) + cchi(2)(2), where c depends on the assumed trait model. E.g., for a simple dominant model (HLOD/D), c is a function of the disease allele frequency p: for p = 0.01, c = 0.0006; while for p = 0.1, c = 0.059. For a simple recessive model (HLOD/R), c = 0.098 independently of p. This latter (recessive) distribution turns out to be the same as the asymptotic distribution of the MLS statistic under the possible triangle constraint, which is asymptotically equivalent to the HLOD/R. The null distribution of the HLOD/D is close to that of the LOD, because the weight c on the chi(2)(2) component is small. These results mean that the cutoff value for a test of size alpha will tend to be smaller for the HLOD/D than the HLOD/R. For example, the alpha = 0.0001 cutoff (on the lod scale) for the HLOD/D with p = 0.05 is 3.01, while for the LOD it is 3.00, and for the HLOD/R it is 3.27. For general pedigrees, explicit analytical expression of the null HLOD distribution does not appear possible, but it will still depend on the assumed genetic model. Copyright 2001 S. Karger AG, Basel
Beatty, William S.; Webb, Elisabeth B.; Kesler, Dylan C.; Naylor, Luke W.; Raedeke, Andrew H.; Humburg, Dale D.; Coluccy, John M.; Soulliere, G.
2015-01-01
Bird conservation Joint Ventures are collaborative partnerships between public agencies and private organizations that facilitate habitat management to support waterfowl and other bird populations. A subset of Joint Ventures has developed energetic carrying capacity models (ECCs) to translate regional waterfowl population goals into habitat objectives during the non-breeding period. Energetic carrying capacity models consider food biomass, metabolism, and available habitat to estimate waterfowl carrying capacity within an area. To evaluate Joint Venture ECCs in the context of waterfowl space use, we monitored 33 female mallards (Anas platyrhynchos) and 55 female American black ducks (A. rubripes) using global positioning system satellite telemetry in the central and eastern United States. To quantify space use, we measured first-passage time (FPT: time required for an individual to transit across a circle of a given radius) at biologically relevant spatial scales for mallards (3.46 km) and American black ducks (2.30 km) during the non-breeding period, which included autumn migration, winter, and spring migration. We developed a series of models to predict FPT using Joint Venture ECCs and compared them to a biological null model that quantified habitat composition and a statistical null model, which included intercept and random terms. Energetic carrying capacity models predicted mallard space use more efficiently during autumn and spring migrations, but the statistical null was the top model for winter. For American black ducks, ECCs did not improve predictions of space use; the biological null was top ranked for winter and the statistical null was top ranked for spring migration. Thus, ECCs provided limited insight into predicting waterfowl space use during the non-breeding season. Refined estimates of spatial and temporal variation in food abundance, habitat conditions, and anthropogenic disturbance will likely improve ECCs and benefit conservation planners in linking non-breeding waterfowl habitat objectives with distribution and population parameters. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Null tests of the standard model using the linear model formalism
NASA Astrophysics Data System (ADS)
Marra, Valerio; Sapone, Domenico
2018-04-01
We test both the Friedmann-Lemaître-Robertson-Walker geometry and Λ CDM cosmology in a model-independent way by reconstructing the Hubble function H (z ), the comoving distance D (z ), and the growth of structure f σ8(z ) using the most recent data available. We use the linear model formalism in order to optimally reconstruct the above cosmological functions, together with their derivatives and integrals. We then evaluate four of the null tests available in the literature that probe both background and perturbation assumptions. For all the four tests, we find agreement, within the errors, with the standard cosmological model.
Demographic inference under the coalescent in a spatial continuum.
Guindon, Stéphane; Guo, Hongbin; Welch, David
2016-10-01
Understanding population dynamics from the analysis of molecular and spatial data requires sound statistical modeling. Current approaches assume that populations are naturally partitioned into discrete demes, thereby failing to be relevant in cases where individuals are scattered on a spatial continuum. Other models predict the formation of increasingly tight clusters of individuals in space, which, again, conflicts with biological evidence. Building on recent theoretical work, we introduce a new genealogy-based inference framework that alleviates these issues. This approach effectively implements a stochastic model in which the distribution of individuals is homogeneous and stationary, thereby providing a relevant null model for the fluctuation of genetic diversity in time and space. Importantly, the spatial density of individuals in a population and their range of dispersal during the course of evolution are two parameters that can be inferred separately with this method. The validity of the new inference framework is confirmed with extensive simulations and the analysis of influenza sequences collected over five seasons in the USA. Copyright © 2016 Elsevier Inc. All rights reserved.
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space monte carlo technique is a non-linear uncertainty analyses technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space monte carlo is cumbersome, requiring the use of multiple commandline utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space monte carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space monte carlo by taking advantage of object oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space monte carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space monte carlo for a synthetic groundwater model with hundreds of estimable parameters.
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
NASA Astrophysics Data System (ADS)
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards proving the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
Exploring the structure and function of temporal networks with dynamic graphlets
Hulovatyy, Y.; Chen, H.; Milenković, T.
2015-01-01
Motivation: With increasing availability of temporal real-world networks, how to efficiently study these data? One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but losing in the process valuable temporal information either completely, or at the interface between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships. Results: We base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging. Availability and implementation: http://www.nd.edu/∼cone/DG. Contact: tmilenko@nd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072480
Deficiency of bone marrow beta3-integrin enhances non-functional neovascularization.
Watson, Alan R; Pitchford, Simon C; Reynolds, Louise E; Direkze, Natalie; Brittan, Mairi; Alison, Malcolm R; Rankin, Sara; Wright, Nicholas A; Hodivala-Dilke, Kairbaan M
2010-03-01
beta3-Integrin is a cell surface adhesion and signalling molecule important in the regulation of tumour angiogenesis. Mice with a global deficiency in beta3-integrin show increased pathological angiogenesis, most likely due to increased vascular endothelial growth factor receptor 2 expression on beta3-null endothelial cells. Here we transplanted beta3-null bone marrow (BM) into wild-type (WT) mice to dissect the role of BM beta3-integrin deficiency in pathological angiogenesis. Mice transplanted with beta3-null bone marrow show significantly enhanced angiogenesis in subcutaneous B16F0 melanoma and Lewis lung carcinoma (LLC) cell models and in B16F0 melanoma lung metastasis when compared with tumours grown in mice transplanted with WT bone marrow. The effect of bone marrow beta3-integrin deficiency was also assessed in the RIPTAg mouse model of pancreatic tumour growth. Again, angiogenesis in mice lacking BM beta3-integrin was enhanced. However, tumour weight between the groups was not significantly altered, suggesting that the enhanced blood vessel density in the mice transplanted with beta3-null bone marrow was not functional. Indeed, we demonstrate that in mice transplanted with beta3-null bone marrow a significant proportion of tumour blood vessels are non-functional when compared with tumour blood vessels in WT-transplanted controls. Furthermore, beta3-null-transplanted mice showed an increased angiogenic response to VEGF in vivo when compared with WT-transplanted animals. BM beta3-integrin deficiency affects the mobilization of progenitor cells to the peripheral circulation. We show that VEGF-induced mobilization of endothelial progenitor cells is enhanced in mice transplanted with beta3-null bone marrow when compared with WT-transplanted controls, suggesting a possible mechanism underlying the increased blood vessel density seen in beta3-null-transplanted mice. In conclusion, although BM beta3-integrin is not required for pathological angiogenesis, our studies demonstrate a role for BM beta3-integrin in VEGF-induced mobilization of bone marrow-derived cells to the peripheral circulation and for the functionality of those vessels in which BM-derived cells become incorporated.
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
Lahvis, Garet P; Pyzalski, Robert W; Glover, Edward; Pitot, Henry C; McElwee, Matthew K; Bradfield, Christopher A
2005-03-01
A developmental role for the Ahr locus has been indicated by the observation that mice harboring a null allele display a portocaval vascular shunt throughout life. To define the ontogeny and determine the identity of this shunt, we developed a visualization approach in which three-dimensional (3D) images of the developing liver vasculature are generated from serial sections. Applying this 3D visualization approach at multiple developmental times allowed us to demonstrate that the portocaval shunt observed in Ahr-null mice is the remnant of an embryonic structure and is not acquired after birth. We observed that the shunt is found in late-stage wild-type embryos but closes during the first 48 h of postnatal life. In contrast, the same structure fails to close in Ahr-null mice and remains open throughout adulthood. The ontogeny of this shunt, along with its 3D position, allowed us to conclude that this shunt is a patent developmental structure known as the ductus venosus (DV). Upon searching for a physiological cause of the patent DV, we observed that during the first 48 h, most major hepatic veins, such as the portal and umbilical veins, normally decrease in diameter but do not change in Ahr-null mice. This observation suggests that failure of the DV to close may be the consequence of increased blood pressure or a failure in vasoconstriction in the developing liver.
NASA Astrophysics Data System (ADS)
Frauendiener, Jörg; Hennig, Jörg
2018-03-01
We extend earlier numerical and analytical considerations of the conformally invariant wave equation on a Schwarzschild background from the case of spherically symmetric solutions, discussed in Frauendiener and Hennig (2017 Class. Quantum Grav. 34 045005), to the case of general, nonsymmetric solutions. A key element of our approach is the modern standard representation of spacelike infinity as a cylinder. With a decomposition into spherical harmonics, we reduce the four-dimensional wave equation to a family of two-dimensional equations. These equations can be used to study the behaviour at the cylinder, where the solutions turn out to have, in general, logarithmic singularities at infinitely many orders. We derive regularity conditions that may be imposed on the initial data, in order to avoid the first singular terms. We then demonstrate that the fully pseudospectral time evolution scheme can be applied to this problem leading to a highly accurate numerical reconstruction of the nonsymmetric solutions. We are particularly interested in the behaviour of the solutions at future null infinity, and we numerically show that the singularities spread to null infinity from the critical set, where the cylinder approaches null infinity. The observed numerical behaviour is consistent with similar logarithmic singularities found analytically on the critical set. Finally, we demonstrate that even solutions with singularities at low orders can be obtained with high accuracy by virtue of a coordinate transformation that converts solutions with logarithmic singularities into smooth solutions.
El-Hoss, Jad; Sullivan, Kate; Cheng, Tegan; Yu, Nicole Y C; Bobyn, Justin D; Peacock, Lauren; Mikulec, Kathy; Baldock, Paul; Alexander, Ian E; Schindeler, Aaron; Little, David G
2012-01-01
Neurofibromatosis type 1 (NF1) is a common genetic condition caused by mutations in the NF1 gene. Patients often suffer from tissue-specific lesions associated with local double-inactivation of NF1. In this study, we generated a novel fracture model to investigate the mechanism underlying congenital pseudarthrosis of the tibia (CPT) associated with NF1. We used a Cre-expressing adenovirus (AdCre) to inactivate Nf1 in vitro in cultured osteoprogenitors and osteoblasts, and in vivo in the fracture callus of Nf1(flox/flox) and Nf1(flox/-) mice. The effects of the presence of Nf1(null) cells were extensively examined. Cultured Nf1(null)-committed osteoprogenitors from neonatal calvaria failed to differentiate and express mature osteoblastic markers, even with recombinant bone morphogenetic protein-2 (rhBMP-2) treatment. Similarly, Nf1(null)-inducible osteoprogenitors obtained from Nf1 MyoDnull mouse muscle were also unresponsive to rhBMP-2. In both closed and open fracture models in Nf1(flox/flox) and Nf1(flox/-) mice, local AdCre injection significantly impaired bone healing, with fracture union being <50% that of wild type controls. No significant difference was seen between Nf1(flox/flox) and Nf1(flox/-) mice. Histological analyses showed invasion of the Nf1(null) fractures by fibrous and highly proliferative tissue. Mean amounts of fibrous tissue were increased upward of 10-fold in Nf1(null) fractures and bromodeoxyuridine (BrdU) staining in closed fractures showed increased numbers of proliferating cells. In Nf1(null) fractures, tartrate-resistant acid phosphatase-positive (TRAP+) cells were frequently observed within the fibrous tissue, not lining a bone surface. In summary, we report that local Nf1 deletion in a fracture callus is sufficient to impair bony union and recapitulate histological features of clinical CPT. Cell culture findings support the concept that Nf1 double inactivation impairs early osteoblastic differentiation. This model provides valuable insight into the pathobiology of the disease, and will be helpful for trialing therapeutic compounds. Copyright © 2012 American Society for Bone and Mineral Research.
Asymptotic symmetries of Rindler space at the horizon and null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyeyoun
2010-08-15
We investigate the asymptotic symmetries of Rindler space at null infinity and at the event horizon using both systematic and ad hoc methods. We find that the approaches that yield infinite-dimensional asymptotic symmetry algebras in the case of anti-de Sitter and flat spaces only give a finite-dimensional algebra for Rindler space at null infinity. We calculate the charges corresponding to these symmetries and confirm that they are finite, conserved, and integrable, and that the algebra of charges gives a representation of the asymptotic symmetry algebra. We also use relaxed boundary conditions to find infinite-dimensional asymptotic symmetry algebras for Rindler spacemore » at null infinity and at the event horizon. We compute the charges corresponding to these symmetries and confirm that they are finite and integrable. We also determine sufficient conditions for the charges to be conserved on-shell, and for the charge algebra to give a representation of the asymptotic symmetry algebra. In all cases, we find that the central extension of the charge algebra is trivial.« less
Papers in Syntax. Working Papers in Linguistics No. 42.
ERIC Educational Resources Information Center
Kathol, Andreas, Ed.; Pollard, Carl, Ed.
1993-01-01
This collection of working papers in syntax includes: "Null Objects in Mandarin Chinese" (Christie Block); "Toward a Linearization-Based Approach to Word Order Variation in Japanese" (Mike Calcagno); "A Lexical Approach to Inalienable Possession Constructions in Korean" (Chung, Chan); "Chinese NP Structure"…
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance bounds to zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
Requirements Formulation and Dynamic Jitter Analysis for Fourier-Kelvin Stellar Interferometer
NASA Technical Reports Server (NTRS)
Liu, Kuo-Chia; Hyde, Tristram; Blaurock, Carl; Bolognese, Jeff; Howard, Joseph; Danchi, William
2004-01-01
The Fourier-Kelvin Stellar Interferometer (FKSI) has been proposed to detect and characterize extra solar giant planets. The baseline configuration for FKSI is a two- aperture, structurally connected nulling interferometer, capable of providing null depth less than lo4 in the infrared. The objective of this paper is to summarize the process for setting the top level requirements and the jitter analysis performed on FKSI to date. The first part of the paper discusses the derivation of dynamic stability requirements, necessary for meeting the FKSI nulling demands. An integrated model including structures, optics, and control systems has been developed to support dynamic jitter analysis and requirements verification. The second part of the paper describes how the integrated model is used to investigate the effects of reaction wheel disturbances on pointing and optical path difference stabilities.
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Son, Heesook; Friedmann, Erika; Thomas, Sue A
2012-01-01
Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
Control of recollection by slow gamma dominating mid-frequency gamma in hippocampus CA1
Dvorak, Dino; Radwan, Basma; Sparks, Fraser T.; Talbot, Zoe Nicole
2018-01-01
Behavior is used to assess memory and cognitive deficits in animals like Fmr1-null mice that model Fragile X Syndrome, but behavior is a proxy for unknown neural events that define cognitive variables like recollection. We identified an electrophysiological signature of recollection in mouse dorsal Cornu Ammonis 1 (CA1) hippocampus. During a shocked-place avoidance task, slow gamma (SG) (30–50 Hz) dominates mid-frequency gamma (MG) (70–90 Hz) oscillations 2–3 s before successful avoidance, but not failures. Wild-type (WT) but not Fmr1-null mice rapidly adapt to relocating the shock; concurrently, SG/MG maxima (SGdom) decrease in WT but not in cognitively inflexible Fmr1-null mice. During SGdom, putative pyramidal cell ensembles represent distant locations; during place avoidance, these are avoided places. During shock relocation, WT ensembles represent distant locations near the currently correct shock zone, but Fmr1-null ensembles represent the formerly correct zone. These findings indicate that recollection occurs when CA1 SG dominates MG and that accurate recollection of inappropriate memories explains Fmr1-null cognitive inflexibility. PMID:29346381
Dictyostelium LvsB has a regulatory role in endosomal vesicle fusion
Falkenstein, Kristin; De Lozanne, Arturo
2014-01-01
ABSTRACT Defects in human lysosomal-trafficking regulator (Lyst) are associated with the lysosomal disorder Chediak–Higashi syndrome. The absence of Lyst results in the formation of enlarged lysosome-related compartments, but the mechanism for how these compartments arise is not well established. Two opposing models have been proposed to explain Lyst function. The fission model describes Lyst as a positive regulator of fission from lysosomal compartments, whereas the fusion model identifies Lyst as a negative regulator of fusion between lysosomal vesicles. Here, we used assays that can distinguish between defects in vesicle fusion versus fission. We compared the phenotype of Dictyostelium discoideum cells defective in LvsB, the ortholog of Lyst, with that of two known fission defect mutants (μ3- and WASH-null mutants). We found that the temporal localization characteristics of the post-lysosomal marker vacuolin, as well as vesicular acidity and the fusion dynamics of LvsB-null cells are distinct from those of both μ3- and WASH-null fission defect mutants. These distinctions are predicted by the fusion defect model and implicate LvsB as a negative regulator of vesicle fusion. PMID:25086066
Liu, Yi; Zhang, Cuiping; Li, Zhenyu; Wang, Chi; Jia, Jianhang; Gao, Tianyan; Hildebrandt, Gerhard; Zhou, Daohong; Bondada, Subbarao; Ji, Peng; St Clair, Daret; Liu, Jinze; Zhan, Changguo; Geiger, Hartmut; Wang, Shuxia; Liang, Ying
2017-04-11
Natural genetic diversity offers an important yet largely untapped resource to decipher the molecular mechanisms regulating hematopoietic stem cell (HSC) function. Latexin (Lxn) is a negative stem cell regulatory gene identified on the basis of genetic diversity. By using an Lxn knockout mouse model, we found that Lxn inactivation in vivo led to the physiological expansion of the entire hematopoietic hierarchy. Loss of Lxn enhanced the competitive repopulation capacity and survival of HSCs in a cell-intrinsic manner. Gene profiling of Lxn-null HSCs showed altered expression of genes enriched in cell-matrix and cell-cell interactions. Thrombospondin 1 (Thbs1) was a potential downstream target with a dramatic downregulation in Lxn-null HSCs. Enforced expression of Thbs1 restored the Lxn inactivation-mediated HSC phenotypes. This study reveals that Lxn plays an important role in the maintenance of homeostatic hematopoiesis, and it may lead to development of safe and effective approaches to manipulate HSCs for clinical benefit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
A New Model to Study the Role of Arachidonic Acid in Colon Cancer Pathophysiology.
Fan, Yang-Yi; Callaway, Evelyn; M Monk, Jennifer; S Goldsby, Jennifer; Yang, Peiying; Vincent, Logan; S Chapkin, Robert
2016-09-01
A significant increase in cyclooxygenase 2 (COX2) gene expression has been shown to promote cylcooxygenase-dependent colon cancer development. Controversy associated with the role of COX2 inhibitors indicates that additional work is needed to elucidate the effects of arachidonic acid (AA)-derived (cyclooxygenase and lipoxygenase) eicosanoids in cancer initiation, progression, and metastasis. We have recently developed a novel Fads1 knockout mouse model that allows for the investigation of AA-dependent eicosanoid deficiency without the complication of essential fatty acid deficiency. Interestingly, the survival rate of Fads1-null mice is severely compromised after 2 months on a semi-purified AA-free diet, which precludes long-term chemoprevention studies. Therefore, in this study, dietary AA levels were titrated to determine the minimal level required for survival, while maintaining a distinct AA-deficient phenotype. Null mice supplemented with AA (0.1%, 0.4%, 0.6%, 2.0%, w/w) in the diet exhibited a dose-dependent increase (P < 0.05) in AA, PGE2, 6-keto PGF1α, TXB2, and EdU-positive proliferative cells in the colon. In subsequent experiments, null mice supplemented with 0.6% AA diet were injected with a colon-specific carcinogen (azoxymethane) in order to assess cancer susceptibility. Null mice exhibited significantly (P < 0.05) reduced levels/multiplicity of aberrant crypt foci (ACF) as compared with wild-type sibling littermate control mice. These data indicate that (i) basal/minimal dietary AA supplementation (0.6%) expands the utility of the Fads1-null mouse model for long-term cancer prevention studies and (ii) that AA content in the colonic epithelium modulates colon cancer risk. Cancer Prev Res; 9(9); 750-7. ©2016 AACR. ©2016 American Association for Cancer Research.
Esteve-Altava, Borja; Rasskin-Gutman, Diego
2014-01-01
Craniofacial sutures and synchondroses form the boundaries among bones in the human skull, providing functional, developmental and evolutionary information. Bone articulations in the skull arise due to interactions between genetic regulatory mechanisms and epigenetic factors such as functional matrices (soft tissues and cranial cavities), which mediate bone growth. These matrices are largely acknowledged for their influence on shaping the bones of the skull; however, it is not fully understood to what extent functional matrices mediate the formation of bone articulations. Aiming to identify whether or not functional matrices are key developmental factors guiding the formation of bone articulations, we have built a network null model of the skull that simulates unconstrained bone growth. This null model predicts bone articulations that arise due to a process of bone growth that is uniform in rate, direction and timing. By comparing predicted articulations with the actual bone articulations of the human skull, we have identified which boundaries specifically need the presence of functional matrices for their formation. We show that functional matrices are necessary to connect facial bones, whereas an unconstrained bone growth is sufficient to connect non-facial bones. This finding challenges the role of the brain in the formation of boundaries between bones in the braincase without neglecting its effect on skull shape. Ultimately, our null model suggests where to look for modified developmental mechanisms promoting changes in bone growth patterns that could affect the development and evolution of the head skeleton. PMID:24975579
Zahradnicek, Oldrich; Horacek, Ivan; Tucker, Abigail S
2012-01-01
This paper describes tooth development in a basal squamate, Paroedura picta. Due to its reproductive strategy, mode of development and position within the reptiles, this gecko represents an excellent model organism for the study of reptile development. Here we document the dental pattern and development of non-functional (null generation) and functional generations of teeth during embryonic development. Tooth development is followed from initiation to cytodifferentiation and ankylosis, as the tooth germs develop from bud, through cap to bell stages. The fate of the single generation of non-functional (null generation) teeth is shown to be variable, with some teeth being expelled from the oral cavity, while others are incorporated into the functional bone and teeth, or are absorbed. Fate appears to depend on the initiation site within the oral cavity, with the first null generation teeth forming before formation of the dental lamina. We show evidence for a stratum intermedium layer in the enamel epithelium of functional teeth and show that the bicuspid shape of the teeth is created by asymmetrical deposition of enamel, and not by folding of the inner dental epithelium as observed in mammals. PMID:22780101
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H : θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
On joint subtree distributions under two evolutionary models.
Wu, Taoyang; Choi, Kwok Pui
2016-04-01
In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.
Kobayashi, Sumitaka; Sata, Fumihiro; Miyashita, Chihiro; Sasaki, Seiko; Ban, Susumu; Araki, Atsuko; Goudarzi, Houman; Kajiwara, Jumboku; Todaka, Takashi; Kishi, Reiko
2017-01-01
We investigated the effects of maternal polymorphisms in 3 genes encoding dioxin-metabolizing enzymes in relation to prenatal dioxin levels on infant birth size in Japan. We examined the relationship between dioxin exposure and birth size in relation to the polymorphisms in the genes encoding aromatic hydrocarbon receptor (AHR [G>A, Arg554Lys]), cytochrome P450 (CYP) 1A1 (T6235C), and glutathione S-transferase mu 1 (GSTM1; Non-null/null) in 421 participants using multiple linear regression models. In mothers carrying the GSTM1 null genotype, a ten-fold increase in total dioxin toxic equivalency was correlated with a decrease in birth weight of -345g (95% confidence interval: -584, -105). We observed adverse effects of maternal GSTM1 null genotype on birth weight in the presence of dioxins exposure during pregnancy. Copyright © 2016 Elsevier Inc. All rights reserved.
Kv7.2 regulates the function of peripheral sensory neurons.
King, Chih H; Lancaster, Eric; Salomon, Daniela; Peles, Elior; Scherer, Steven S
2014-10-01
The Kv7 (KCNQ) family of voltage-gated K(+) channels regulates cellular excitability. The functional role of Kv7.2 has been hampered by the lack of a viable Kcnq2-null animal model. In this study, we generated homozygous Kcnq2-null sensory neurons using the Cre-Lox system; in these mice, Kv7.2 expression is absent in the peripheral sensory neurons, whereas the expression of other molecular components of nodes (including Kv7.3), paranodes, and juxtaparanodes is not altered. The conditional Kcnq2-null animals exhibit normal motor performance but have increased thermal hyperalgesia and mechanical allodynia. Whole-cell patch recording technique demonstrates that Kcnq2-null sensory neurons have increased excitability and reduced spike frequency adaptation. Taken together, our results suggest that the loss of Kv7.2 activity increases the excitability of primary sensory neurons. © 2014 Wiley Periodicals, Inc.
Castilla-Ortega, Estela; Pavón, Francisco Javier; Sánchez-Marín, Laura; Estivill-Torrús, Guillermo; Pedraza, Carmen; Blanco, Eduardo; Suárez, Juan; Santín, Luis; Rodríguez de Fonseca, Fernando; Serrano, Antonia
2016-04-01
Lysophosphatidic acid species (LPA) are lipid bioactive signaling molecules that have been recently implicated in the modulation of emotional and motivational behaviors. The present study investigates the consequences of either genetic deletion or pharmacological blockade of lysophosphatidic acid receptor-1 (LPA1) in alcohol consumption. The experiments were performed in alcohol-drinking animals by using LPA1-null mice and administering the LPA1 receptor antagonist Ki16425 in both mice and rats. In the two-bottle free choice paradigm, the LPA1-null mice preferred the alcohol more than their wild-type counterparts. Whereas the male LPA1-null mice displayed this higher preference at all doses tested, the female LPA1-null mice only consumed more alcohol at 6% concentration. The male LPA1-null mice were then further characterized, showing a notably increased ethanol drinking after a deprivation period and a reduced sleep time after acute ethanol administration. In addition, LPA1-null mice were more anxious than the wild-type mice in the elevated plus maze test. For the pharmacological experiments, the acute administration of the antagonist Ki16425 consistently increased ethanol consumption in both wild-type mice and rats; while it did not modulate alcohol drinking in the LPA1-null mice and lacked intrinsic rewarding properties and locomotor effects in a conditioned place preference paradigm. In addition, LPA1-null mice exhibited a marked reduction on the expression of glutamate-transmission-related genes in the prefrontal cortex similar to those described in alcohol-exposed rodents. Results suggest a relevant role for the LPA/LPA1 signaling system in alcoholism. In addition, the LPA1-null mice emerge as a new model for genetic vulnerability to excessive alcohol drinking. The pharmacological manipulation of LPA1 receptor arises as a new target for the study and treatment of alcoholism. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reconnection at three dimensional magnetic null points: Effect of current sheet asymmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyper, P. F.; Jain, Rekha
2013-05-15
Asymmetric current sheets are likely to be prevalent in both astrophysical and laboratory plasmas with complex three dimensional (3D) magnetic topologies. This work presents kinematic analytical models for spine and fan reconnection at a radially symmetric 3D null (i.e., a null where the eigenvalues associated with the fan plane are equal) with asymmetric current sheets. Asymmetric fan reconnection is characterized by an asymmetric reconnection of flux past each spine line and a bulk flow of plasma across the null point. In contrast, asymmetric spine reconnection is characterized by the reconnection of an equal quantity of flux across the fan planemore » in both directions. The higher modes of spine reconnection also include localized wedges of vortical flux transport in each half of the fan. In this situation, two definitions for reconnection rate become appropriate: a local reconnection rate quantifying how much flux is genuinely reconnected across the fan plane and a global rate associated with the net flux driven across each semi-plane. Through a scaling analysis, it is shown that when the ohmic dissipation in the layer is assumed to be constant, the increase in the local rate bleeds from the global rate as the sheet deformation is increased. Both models suggest that asymmetry in the current sheet dimensions will have a profound effect on the reconnection rate and manner of flux transport in reconnection involving 3D nulls.« less
Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong
2016-04-07
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is prosed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent from the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates an object class label to each moving region (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. For achieving this characteristic, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using video surveillance benchmarks publicly accessible. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.
Vacuum Nuller Testbed Performance, Characterization and Null Control
NASA Technical Reports Server (NTRS)
Lyon, R. G.; Clampin, M.; Petrone, P.; Mallik, U.; Madison, T.; Bolcar, M.; Noecker, C.; Kendrick, S.; Helmbrecht, M. A.
2011-01-01
The Visible Nulling Coronagraph (VNC) can detect and characterize exoplanets with filled, segmented and sparse aperture telescopes, thereby spanning the choice of future internal coronagraph exoplanet missions. NASA/Goddard Space Flight Center (GSFC) has developed a Vacuum Nuller Testbed (VNT) to advance this approach, and assess and advance technologies needed to realize a VNC as a flight instrument. The VNT is an ultra-stable testbed operating at 15 Hz in vacuum. It consists of a MachZehnder nulling interferometer; modified with a "W" configuration to accommodate a hexpacked MEMS based deformable mirror (DM), coherent fiber bundle and achromatic phase shifters. The 2-output channels are imaged with a vacuum photon counting camera and conventional camera. Error-sensing and feedback to DM and delay line with control algorithms are implemented in a real-time architecture. The inherent advantage of the VNC is that it is its own interferometer and directly controls its errors by exploiting images from bright and dark channels simultaneously. Conservation of energy requires the sum total of the photon counts be conserved independent of the VNC state. Thus sensing and control bandwidth is limited by the target stars throughput, with the net effect that the higher bandwidth offloads stressing stability tolerances within the telescope. We report our recent progress with the VNT towards achieving an incremental sequence of contrast milestones of 10(exp 8) , 10(exp 9) and 10(exp 10) respectively at inner working angles approaching 2A/D. Discussed will be the optics, lab results, technologies, and null control. Shown will be evidence that the milestones have been achieved.
Ali, Niwa; Flutter, Barry; Sanchez Rodriguez, Robert; Sharif-Paghaleh, Ehsan; Barber, Linda D; Lombardi, Giovanna; Nestle, Frank O
2012-01-01
The occurrence of Graft-versus-Host Disease (GvHD) is a prevalent and potentially lethal complication that develops following hematopoietic stem cell transplantation. Humanized mouse models of xenogeneic-GvHD based upon immunodeficient strains injected with human peripheral blood mononuclear cells (PBMC; "Hu-PBMC mice") are important tools to study human immune function in vivo. The recent introduction of targeted deletions at the interleukin-2 common gamma chain (IL-2Rγ(null)), notably the NOD-scid IL-2Rγ(null) (NSG) and BALB/c-Rag2(null) IL-2Rγ(null) (BRG) mice, has led to improved human cell engraftment. Despite their widespread use, a comprehensive characterisation of engraftment and GvHD development in the Hu-PBMC NSG and BRG models has never been performed in parallel. We compared engrafted human lymphocyte populations in the peripheral blood, spleens, lymph nodes and bone marrow of these mice. Kinetics of engraftment differed between the two strains, in particular a significantly faster expansion of the human CD45(+) compartment and higher engraftment levels of CD3(+) T-cells were observed in NSG mice, which may explain the faster rate of GvHD development in this model. The pathogenesis of human GvHD involves anti-host effector cell reactivity and cutaneous tissue infiltration. Despite this, the presence of T-cell subsets and tissue homing markers has only recently been characterised in the peripheral blood of patients and has never been properly defined in Hu-PBMC models of GvHD. Engrafted human cells in NSG mice shows a prevalence of tissue homing cells with a T-effector memory (T(EM)) phenotype and high levels of cutaneous lymphocyte antigen (CLA) expression. Characterization of Hu-PBMC mice provides a strong preclinical platform for the application of novel immunotherapies targeting T(EM)-cell driven GvHD.
Ali, Niwa; Flutter, Barry; Sanchez Rodriguez, Robert; Sharif-Paghaleh, Ehsan; Barber, Linda D.; Lombardi, Giovanna; Nestle, Frank O.
2012-01-01
The occurrence of Graft-versus-Host Disease (GvHD) is a prevalent and potentially lethal complication that develops following hematopoietic stem cell transplantation. Humanized mouse models of xenogeneic-GvHD based upon immunodeficient strains injected with human peripheral blood mononuclear cells (PBMC; “Hu-PBMC mice”) are important tools to study human immune function in vivo. The recent introduction of targeted deletions at the interleukin-2 common gamma chain (IL-2Rγnull), notably the NOD-scid IL-2Rγnull (NSG) and BALB/c-Rag2 null IL-2Rγnull (BRG) mice, has led to improved human cell engraftment. Despite their widespread use, a comprehensive characterisation of engraftment and GvHD development in the Hu-PBMC NSG and BRG models has never been performed in parallel. We compared engrafted human lymphocyte populations in the peripheral blood, spleens, lymph nodes and bone marrow of these mice. Kinetics of engraftment differed between the two strains, in particular a significantly faster expansion of the human CD45+ compartment and higher engraftment levels of CD3+ T-cells were observed in NSG mice, which may explain the faster rate of GvHD development in this model. The pathogenesis of human GvHD involves anti-host effector cell reactivity and cutaneous tissue infiltration. Despite this, the presence of T-cell subsets and tissue homing markers has only recently been characterised in the peripheral blood of patients and has never been properly defined in Hu-PBMC models of GvHD. Engrafted human cells in NSG mice shows a prevalence of tissue homing cells with a T-effector memory (TEM) phenotype and high levels of cutaneous lymphocyte antigen (CLA) expression. Characterization of Hu-PBMC mice provides a strong preclinical platform for the application of novel immunotherapies targeting TEM-cell driven GvHD. PMID:22937164
Randomizing growing networks with a time-respecting null model
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš
2018-05-01
Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
Blob dynamics in TORPEX poloidal null configurations
NASA Astrophysics Data System (ADS)
Shanahan, B. W.; Dudson, B. D.
2016-12-01
3D blob dynamics are simulated in X-point magnetic configurations in the TORPEX device via a non-field-aligned coordinate system, using an isothermal model which evolves density, vorticity, parallel velocity and parallel current density. By modifying the parallel gradient operator to include perpendicular perturbations from poloidal field coils, numerical singularities associated with field aligned coordinates are avoided. A comparison with a previously developed analytical model (Avino 2016 Phys. Rev. Lett. 116 105001) is performed and an agreement is found with minimal modification. Experimental comparison determines that the null region can cause an acceleration of filaments due to increasing connection length, but this acceleration is small relative to other effects, which we quantify. Experimental measurements (Avino 2016 Phys. Rev. Lett. 116 105001) are reproduced, and the dominant acceleration mechanism is identified as that of a developing dipole in a moving background. Contributions from increasing connection length close to the null point are a small correction.
Deep Broad-Band Infrared Nulling Using A Single-Mode Fiber Beam Combiner and Baseline Rotation
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Haguenauer, P.; Serabyn, E.; Liewer, K.
2006-01-01
The basic advantage of single-mode fibers for deep nulling applications resides in their spatial filtering ability, and has now long been known. However, and as suggested more recently, a single-mode fiber can also be used for direct coherent recombination of spatially separated beams, i.e. in a 'multi-axial' nulling scheme. After the first successful demonstration of deep (<2e-6) visible LASER nulls using this technique (Haguenauer & Serabyn, Applied Optics 2006), we decided to work on an infrared extension for ground based astronomical observations, e.g. using two or more off-axis sub-apertures of a large ground based telescope. In preparation for such a system, we built and tested a laboratory infrared fiber nuller working in a wavelength regime where atmospheric turbulence can be efficiently corrected, over a pass band (approx.1.5 to 1.8 micron) broad enough to provide reasonable sensitivity. In addition, since no snapshot images are readily accessible with a (single) fiber nuller, we also tested baseline rotation as an approach to detect off-axis companions while keeping a central null. This modulation technique is identical to the baseline rotation envisioned for the TPF-I space mission. Within this context, we report here on early laboratory results showing deep stable broad-band dual polarization infrared nulls <5e-4 (currently limited by detector noise), and visible LASER nulls better than 3e-4 over a 360 degree rotation of the baseline. While further work will take place in the laboratory to achieve deeper stable broad-band nulls and test off-axis sources detection through rotation, the emphasis will be put on bringing such a system to a telescope as soon as possible. Detection capability at the 500:1 contrast ratio in the K band (2.2 microns) seem readily accessible within 50-100 mas of the optical axis, even with a first generation system mounted on a >5m AO equipped telescope such as the Palomar Hale 200 inch, the Keck, Subaru or Gemini telescopes.
Collins, Carol M.; Ellis, Joseph A.
2017-01-01
ABSTRACT Mutations in the gene encoding emerin cause Emery–Dreifuss muscular dystrophy (EDMD). Emerin is an integral inner nuclear membrane protein and a component of the nuclear lamina. EDMD is characterized by skeletal muscle wasting, cardiac conduction defects and tendon contractures. The failure to regenerate skeletal muscle is predicted to contribute to the skeletal muscle pathology of EDMD. We hypothesize that muscle regeneration defects are caused by impaired muscle stem cell differentiation. Myogenic progenitors derived from emerin-null mice were used to confirm their impaired differentiation and analyze selected myogenic molecular pathways. Emerin-null progenitors were delayed in their cell cycle exit, had decreased myosin heavy chain (MyHC) expression and formed fewer myotubes. Emerin binds to and activates histone deacetylase 3 (HDAC3). Here, we show that theophylline, an HDAC3-specific activator, improved myotube formation in emerin-null cells. Addition of the HDAC3-specific inhibitor RGFP966 blocked myotube formation and MyHC expression in wild-type and emerin-null myogenic progenitors, but did not affect cell cycle exit. Downregulation of emerin was previously shown to affect the p38 MAPK and ERK/MAPK pathways in C2C12 myoblast differentiation. Using a pure population of myogenic progenitors completely lacking emerin expression, we show that these pathways are also disrupted. ERK inhibition improved MyHC expression in emerin-null cells, but failed to rescue myotube formation or cell cycle exit. Inhibition of p38 MAPK prevented differentiation in both wild-type and emerin-null progenitors. These results show that each of these molecular pathways specifically regulates a particular stage of myogenic differentiation in an emerin-dependent manner. Thus, pharmacological targeting of multiple pathways acting at specific differentiation stages may be a better therapeutic approach in the future to rescue muscle regeneration in vivo. PMID:28188262
Testing the TPF Interferometry Approach before Launch
NASA Technical Reports Server (NTRS)
Serabyn, Eugene; Mennesson, Bertrand
2006-01-01
One way to directly detect nearby extra-solar planets is via their thermal infrared emission, and with this goal in mind, both NASA and ESA are investigating cryogenic infrared interferometers. Common to both agencies' approaches to faint off-axis source detection near bright stars is the use of a rotating nulling interferometer, such as the Terrestrial Planet Finder interferometer (TPF-I), or Darwin. In this approach, the central star is nulled, while the emission from off-axis sources is transmitted and modulated by the rotation of the off-axis fringes. Because of the high contrasts involved, and the novelty of the measurement technique, it is essential to gain experience with this technique before launch. Here we describe a simple ground-based experiment that can test the essential aspects of the TPF signal measurement and image reconstruction approaches by generating a rotating interferometric baseline within the pupil of a large singleaperture telescope. This approach can mimic potential space-based interferometric configurations, and allow the extraction of signals from off-axis sources using the same algorithms proposed for the space-based missions. This approach should thus allow for testing of the applicability of proposed signal extraction algorithms for the detection of single and multiple near-neighbor companions...
Schlathölter, Ina; Jänsch, Melanie; Flachowsky, Henryk; Broggini, Giovanni Antonio Lodovico; Hanke, Magda-Viola; Patocchi, Andrea
2018-06-01
The approach presented here can be applied to reduce the time needed to introduce traits from wild apples into null segregant advanced selections by one-fourth. Interesting traits like resistances to pathogens are often found within the wild apple gene pool. However, the long juvenile phase of apple seedlings hampers the rapid introduction of these traits into new cultivars. The rapid crop cycle breeding approach used in this paper is based on the overexpression of the birch (Betula pendula) MADS4 transcription factor in apple. Using the early flowering line T1190 and 'Evereste' as source of the fire blight resistance (Fb_E locus), we successfully established 18 advanced selections of the fifth generation in the greenhouse within 7 years. Fifteen individuals showed the habitus expected of a regular apple seedling, while three showed very short internodes. The null segregants possessing a regular habitus maintained the high level of fire blight resistance typical for 'Evereste'. Using SSR markers, we estimated the percentage of genetic drag from 'Evereste' still associated with Fb_E on linkage group 12 (LG12). Eight out of the 18 selections had only 4% of 'Evereste' genome left. Since genotypes carrying the apple scab resistance gene Rvi6 and the fire blight resistance QTL Fb_F7 were used as parents in the course of the experiments, these resistances were also identified in some of the null segregants. One seedling is particularly interesting as, beside Fb_E, it also carries Fb_F7 heterozygously and Rvi6 homozygously. If null segregants obtained using this method will be considered as not genetically modified in Europe, as is already the case in the USA, this genotype could be a very promising parent for breeding new fire blight and scab-resistant apple cultivars in European apple breeding programs.
Estimating the Proportion of True Null Hypotheses Using the Pattern of Observed p-values
Tong, Tiejun; Feng, Zeny; Hilton, Julia S.; Zhao, Hongyu
2013-01-01
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated from the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators in the presence of the dependence among test statistics can be poor, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate for the proportion of true-null p-values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance. PMID:24078762
Existence and stability of circular orbits in static and axisymmetric spacetimes
NASA Astrophysics Data System (ADS)
Jia, Junji; Pang, Xiankai; Yang, Nan
2018-04-01
The existence and stability of timelike and null circular orbits (COs) in the equatorial plane of general static and axisymmetric (SAS) spacetime are investigated in this work. Using the fixed point approach, we first obtained a necessary and sufficient condition for the non-existence of timelike COs. It is then proven that there will always exist timelike COs at large ρ in an asymptotically flat SAS spacetime with a positive ADM mass and moreover, these timelike COs are stable. Some other sufficient conditions on the stability of timelike COs are also solved. We then found the necessary and sufficient condition on the existence of null COs. It is generally shown that the existence of timelike COs in SAS spacetime does not imply the existence of null COs, and vice-versa, regardless whether the spacetime is asymptotically flat or the ADM mass is positive or not. These results are then used to show the existence of timelike COs and their stability in an SAS Einstein-Yang-Mills-Dilaton spacetimes whose metric is not completely known. We also used the theorems to deduce the existence of timelike and null COs in some known SAS spacetimes.
Estimating the Proportion of True Null Hypotheses Using the Pattern of Observed p-values.
Tong, Tiejun; Feng, Zeny; Hilton, Julia S; Zhao, Hongyu
2013-01-01
Estimating the proportion of true null hypotheses, π 0 , has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π 0 in the literature are motivated from the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators in the presence of the dependence among test statistics can be poor, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π 0 by incorporating the distribution pattern of the observed p -values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate for the proportion of true-null p -values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 - λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
Kunz, Alexander; Abe, Takato; Hochrainer, Karin; Shimamura, Munehisa; Anrather, Josef; Racchumi, Gianfranco; Zhou, Ping; Iadecola, Costantino
2008-02-13
CD36, a class-B scavenger receptor involved in multiple functions, including inflammatory signaling, may also contribute to ischemic brain injury through yet unidentified mechanisms. We investigated whether CD36 participates in the molecular events underlying the inflammatory reaction that accompanies cerebral ischemia and may contribute to the tissue damage. We found that activation of nuclear factor-kappaB, a transcription factor that coordinates postischemic gene expression, is attenuated in CD36-null mice subjected to middle cerebral artery occlusion. The infiltration of neutrophils and the glial reaction induced by cerebral ischemia were suppressed. Treatment with an inhibitor of inducible nitric oxide synthase, an enzyme that contributes to the tissue damage, reduced ischemic brain injury in wild-type mice, but not in CD36 nulls. In contrast to cerebral ischemia, the molecular and cellular inflammatory changes induced by intracerebroventricular injection of interleukin-1beta were not attenuated in CD36-null mice. The findings unveil a novel role of CD36 in early molecular events leading to nuclear factor-kappaB activation and postischemic inflammation. Inhibition of CD36 signaling may be a valuable therapeutic approach to counteract the deleterious effects of postischemic inflammation.
ERIC Educational Resources Information Center
LeMire, Steven D.
2010-01-01
This paper proposes an argument framework for the teaching of null hypothesis statistical testing and its application in support of research. Elements of the Toulmin (1958) model of argument are used to illustrate the use of p values and Type I and Type II error rates in support of claims about statistical parameters and subject matter research…
Tuckett, Andrea Z; Thornton, Raymond H; O'Reilly, Richard J; van den Brink, Marcel R M; Zakrzewski, Johannes L
2017-05-16
Even though hematopoietic stem cell transplantation can be curative in patients with severe combined immunodeficiency, there is a need for additional strategies boosting T cell immunity in individuals suffering from genetic disorders of lymphoid development. Here we show that image-guided intrathymic injection of hematopoietic stem and progenitor cells in NOD-scid IL2rγ null mice is feasible and facilitates the generation of functional T cells conferring protective immunity. Hematopoietic stem and progenitor cells were isolated from the bone marrow of healthy C57BL/6 mice (wild-type, Luciferase + , CD45.1 + ) and injected intravenously or intrathymically into both male and female, young or aged NOD-scid IL2rγ null recipients. The in vivo fate of injected cells was analyzed by bioluminescence imaging and flow cytometry of thymus- and spleen-derived T cell populations. In addition to T cell reconstitution, we evaluated mice for evidence of immune dysregulation based on diabetes development and graft-versus-host disease. T cell immunity following intrathymic injection of hematopoietic stem and progenitor cells in NOD-scid IL2rγ null mice was assessed in a B cell lymphoma model. Despite the small size of the thymic remnant in NOD-scid IL2rγ null mice, we were able to accomplish precise intrathymic delivery of hematopoietic stem and progenitor cells by ultrasound-guided injection. Thymic reconstitution following intrathymic injection of healthy allogeneic hematopoietic cells was most effective in young male recipients, indicating that even in the setting of severe immunodeficiency, sex and age are important variables for thymic function. Allogeneic T cells generated in intrathymically injected NOD-scid IL2rγ null mice displayed anti-lymphoma activity in vivo, but we found no evidence for severe auto/alloreactivity in T cell-producing NOD-scid IL2rγ null mice, suggesting that immune dysregulation is not a major concern. Our findings suggest that intrathymic injection of donor hematopoietic stem and progenitor cells is a safe and effective strategy to establish protective T cell immunity in a mouse model of severe combined immunodeficiency.
Orexin Receptor Antagonism Improves Sleep and Reduces Seizures in Kcna1-null Mice
Roundtree, Harrison M.; Simeone, Timothy A.; Johnson, Chaz; Matthews, Stephanie A.; Samson, Kaeli K.; Simeone, Kristina A.
2016-01-01
Study Objective: Comorbid sleep disorders occur in approximately one-third of people with epilepsy. Seizures and sleep disorders have an interdependent relationship where the occurrence of one can exacerbate the other. Orexin, a wake-promoting neuropeptide, is associated with sleep disorder symptoms. Here, we tested the hypothesis that orexin dysregulation plays a role in the comorbid sleep disorder symptoms in the Kcna1-null mouse model of temporal lobe epilepsy. Methods: Rest-activity was assessed using infrared beam actigraphy. Sleep architecture and seizures were assessed using continuous video-electroencephalography-electromyography recordings in Kcna1-null mice treated with vehicle or the dual orexin receptor antagonist, almorexant (100 mg/kg, intraperitoneally). Orexin levels in the lateral hypothalamus/perifornical region (LH/P) and hypothalamic pathology were assessed with immunohistochemistry and oxygen polarography. Results: Kcna1-null mice have increased latency to rapid eye movement (REM) sleep onset, sleep fragmentation, and number of wake epochs. The numbers of REM and non-REM (NREM) sleep epochs are significantly reduced in Kcna1-null mice. Severe seizures propagate to the wake-promoting LH/P where injury is apparent (indicated by astrogliosis, blood-brain barrier permeability, and impaired mitochondrial function). The number of orexin-positive neurons is increased in the LH/P compared to wild-type LH/P. Treatment with a dual orexin receptor antagonist significantly increases the number and duration of NREM sleep epochs and reduces the latency to REM sleep onset. Further, almorexant treatment reduces the incidence of severe seizures and overall seizure burden. Interestingly, we report a significant positive correlation between latency to REM onset and seizure burden in Kcna1-null mice. Conclusion: Dual orexin receptor antagonists may be an effective sleeping aid in epilepsy, and warrants further study on their somnogenic and ant-seizure effects in other epilepsy models. Citation: Roundtree HM, Simeone TA, Johnson C, Matthews SA, Samson KK, Simeone KA. Orexin receptor antagonism improves sleep and reduces seizures in Kcna1-null mice. SLEEP 2016;39(2):357–368. PMID:26446112
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either the historical observations, e.g. the persistence model, or the instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply the least-square regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model to propose a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the accuracy of the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
NASA Astrophysics Data System (ADS)
Ryutov, D. D.; Soukhanovskii, V. A.
2015-11-01
The snowflake magnetic configuration is characterized by the presence of two closely spaced poloidal field nulls that create a characteristic hexagonal (reminiscent of a snowflake) separatrix structure. The magnetic field properties and the plasma behaviour in the snowflake are determined by the simultaneous action of both nulls, this generating a lot of interesting physics, as well as providing a chance for improving divertor performance. Among potential beneficial effects of this geometry are: increased volume of a low poloidal field around the null, increased connection length, and the heat flux sharing between multiple divertor channels. The authors summarise experimental results obtained with the snowflake configuration on several tokamaks. Wherever possible, relation to the existing theoretical models is described.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
Esteve-Altava, Borja; Rasskin-Gutman, Diego
2014-09-01
Craniofacial sutures and synchondroses form the boundaries among bones in the human skull, providing functional, developmental and evolutionary information. Bone articulations in the skull arise due to interactions between genetic regulatory mechanisms and epigenetic factors such as functional matrices (soft tissues and cranial cavities), which mediate bone growth. These matrices are largely acknowledged for their influence on shaping the bones of the skull; however, it is not fully understood to what extent functional matrices mediate the formation of bone articulations. Aiming to identify whether or not functional matrices are key developmental factors guiding the formation of bone articulations, we have built a network null model of the skull that simulates unconstrained bone growth. This null model predicts bone articulations that arise due to a process of bone growth that is uniform in rate, direction and timing. By comparing predicted articulations with the actual bone articulations of the human skull, we have identified which boundaries specifically need the presence of functional matrices for their formation. We show that functional matrices are necessary to connect facial bones, whereas an unconstrained bone growth is sufficient to connect non-facial bones. This finding challenges the role of the brain in the formation of boundaries between bones in the braincase without neglecting its effect on skull shape. Ultimately, our null model suggests where to look for modified developmental mechanisms promoting changes in bone growth patterns that could affect the development and evolution of the head skeleton. © 2014 Anatomical Society.
Quantifying lead-time bias in risk factor studies of cancer through simulation.
Jansen, Rick J; Alexander, Bruce H; Anderson, Kristin E; Church, Timothy R
2013-11-01
Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Surveillance Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust observed OR in the actual study. For this particular study design, as average preclinical duration increased, the bias in the total-physical activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. Copyright © 2013 Elsevier Inc. All rights reserved.
Intra-fraction motion of the prostate is a random walk
NASA Astrophysics Data System (ADS)
Ballhausen, H.; Li, M.; Hegemann, N.-S.; Ganswindt, U.; Belka, C.
2015-01-01
A random walk model for intra-fraction motion has been proposed, where at each step the prostate moves a small amount from its current position in a random direction. Online tracking data from perineal ultrasound is used to validate or reject this model against alternatives. Intra-fraction motion of a prostate was recorded by 4D ultrasound (Elekta Clarity system) during 84 fractions of external beam radiotherapy of six patients. In total, the center of the prostate was tracked for 8 h in intervals of 4 s. Maximum likelihood model parameters were fitted to the data. The null hypothesis of a random walk was tested with the Dickey-Fuller test. The null hypothesis of stationarity was tested by the Kwiatkowski-Phillips-Schmidt-Shin test. The increase of variance in prostate position over time and the variability in motility between fractions were analyzed. Intra-fraction motion of the prostate was best described as a stochastic process with an auto-correlation coefficient of ρ = 0.92 ± 0.13. The random walk hypothesis (ρ = 1) could not be rejected (p = 0.27). The static noise hypothesis (ρ = 0) was rejected (p < 0.001). The Dickey-Fuller test rejected the null hypothesis ρ = 1 in 25% to 32% of cases. On average, the Kwiatkowski-Phillips-Schmidt-Shin test rejected the null hypothesis ρ = 0 with a probability of 93% to 96%. The variance in prostate position increased linearly over time (r2 = 0.9 ± 0.1). Variance kept increasing and did not settle at a maximum as would be expected from a stationary process. There was substantial variability in motility between fractions and patients with maximum aberrations from isocenter ranging from 0.5 mm to over 10 mm in one patient alone. In conclusion, evidence strongly suggests that intra-fraction motion of the prostate is a random walk and neither static (like inter-fraction setup errors) nor stationary (like a cyclic motion such as breathing, for example). The prostate tends to drift away from the isocenter during a fraction, and this variance increases with time, such that shorter fractions are beneficial to the problem of intra-fraction motion. As a consequence, fixed safety margins (which would over-compensate at the beginning and under-compensate at the end of a fraction) cannot optimally account for intra-fraction motion. Instead, online tracking and position correction on-the-fly should be considered as the preferred approach to counter intra-fraction motion.
Minimum spanning tree analysis of the human connectome
Sommer, Iris E.; Bohlken, Marc M.; Tewarie, Prejaas; Draaisma, Laurijn; Zalesky, Andrew; Di Biase, Maria; Brown, Jesse A.; Douw, Linda; Otte, Willem M.; Mandl, René C.W.; Stam, Cornelis J.
2018-01-01
Abstract One of the challenges of brain network analysis is to directly compare network organization between subjects, irrespective of the number or strength of connections. In this study, we used minimum spanning tree (MST; a unique, acyclic subnetwork with a fixed number of connections) analysis to characterize the human brain network to create an empirical reference network. Such a reference network could be used as a null model of connections that form the backbone structure of the human brain. We analyzed the MST in three diffusion‐weighted imaging datasets of healthy adults. The MST of the group mean connectivity matrix was used as the empirical null‐model. The MST of individual subjects matched this reference MST for a mean 58%–88% of connections, depending on the analysis pipeline. Hub nodes in the MST matched with previously reported locations of hub regions, including the so‐called rich club nodes (a subset of high‐degree, highly interconnected nodes). Although most brain network studies have focused primarily on cortical connections, cortical–subcortical connections were consistently present in the MST across subjects. Brain network efficiency was higher when these connections were included in the analysis, suggesting that these tracts may be utilized as the major neural communication routes. Finally, we confirmed that MST characteristics index the effects of brain aging. We conclude that the MST provides an elegant and straightforward approach to analyze structural brain networks, and to test network topological features of individual subjects in comparison to empirical null models. PMID:29468769
Yarotskyy, Viktor; Protasi, Feliciano; Dirksen, Robert T.
2013-01-01
Store-operated calcium entry (SOCE) channels play an important role in Ca2+ signaling. Recently, excessive SOCE was proposed to play a central role in the pathogenesis of malignant hyperthermia (MH), a pharmacogenic disorder of skeletal muscle. We tested this hypothesis by characterizing SOCE current (ISkCRAC) magnitude, voltage dependence, and rate of activation in myotubes derived from two mouse models of anesthetic- and heat-induced sudden death: 1) type 1 ryanodine receptor (RyR1) knock-in mice (Y524S/+) and 2) calsequestrin 1 and 2 double knock-out (dCasq-null) mice. ISkCRAC voltage dependence and magnitude at -80 mV were not significantly different in myotubes derived from wild type (WT), Y524S/+ and dCasq-null mice. However, the rate of ISkCRAC activation upon repetitive depolarization was significantly faster at room temperature in myotubes from Y524S/+ and dCasq-null mice. In addition, the maximum rate of ISkCRAC activation in dCasq-null myotubes was also faster than WT at more physiological temperatures (35-37°C). Azumolene (50 µM), a more water-soluble analog of dantrolene that is used to reverse MH crises, failed to alter ISkCRAC density or rate of activation. Together, these results indicate that while an increased rate of ISkCRAC activation is a common characteristic of myotubes derived from Y524S/+ and dCasq-null mice and that the protective effects of azumolene are not due to a direct inhibition of SOCE channels. PMID:24143248
Endemicity and evolutionary value: a study of Chilean endemic vascular plant genera
Scherson, Rosa A; Albornoz, Abraham A; Moreira-Muñoz, Andrés S; Urbina-Casanova, Rafael
2014-01-01
This study uses phylogeny-based measures of evolutionary potential (phylogenetic diversity and community structure) to evaluate the evolutionary value of vascular plant genera endemic to Chile. Endemicity is regarded as a very important consideration for conservation purposes. Taxa that are endemic to a single country are valuable conservation targets, as their protection depends upon a single government policy. This is especially relevant in developing countries in which conservation is not always a high resource allocation priority. Phylogeny-based measures of evolutionary potential such as phylogenetic diversity (PD) have been regarded as meaningful measures of the “value” of taxa and ecosystems, as they are able to account for the attributes that could allow taxa to recover from environmental changes. Chile is an area of remarkable endemism, harboring a flora that shows the highest number of endemic genera in South America. We studied PD and community structure of this flora using a previously available supertree at the genus level, to which we added DNA sequences of 53 genera endemic to Chile. Using discrepancy values and a null model approach, we decoupled PD from taxon richness, in order to compare their geographic distribution over a one-degree grid. An interesting pattern was observed in which areas to the southwest appear to harbor more PD than expected by their generic richness than those areas to the north of the country. In addition, some southern areas showed more PD than expected by chance, as calculated with the null model approach. Geological history as documented by the study of ancient floras as well as glacial refuges in the coastal range of southern Chile during the quaternary seem to be consistent with the observed pattern, highlighting the importance of this area for conservation purposes. PMID:24683462
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benny Klimek, Margaret E.; Aydogdu, Tufan; Link, Majik J.
2010-01-15
Cachexia, progressive loss of fat and muscle mass despite adequate nutrition, is a devastating complication of cancer associated with poor quality of life and increased mortality. Myostatin is a potent tonic muscle growth inhibitor. We tested how myostatin inhibition might influence cancer cachexia using genetic and pharmacological approaches. First, hypermuscular myostatin null mice were injected with Lewis lung carcinoma or B16F10 melanoma cells. Myostatin null mice were more sensitive to tumor-induced cachexia, losing more absolute mass and proportionately more muscle mass than wild-type mice. Because myostatin null mice lack expression from development, however, we also sought to manipulate myostatin acutely.more » The histone deacetylase inhibitor Trichostatin A has been shown to increase muscle mass in normal and dystrophic mice by inducing the myostatin inhibitor, follistatin. Although Trichostatin A administration induced muscle growth in normal mice, it failed to preserve muscle in colon-26 cancer cachexia. Finally we sought to inhibit myostatin and related ligands by administration of the Activin receptor extracellular domain/Fc fusion protein, ACVR2B-Fc. Systemic administration of ACVR2B-Fc potently inhibited muscle wasting and protected adipose stores in both colon-26 and Lewis lung carcinoma cachexia, without affecting tumor growth. Enhanced cachexia in myostatin knockouts indicates that host-derived myostatin is not the sole mediator of muscle wasting in cancer. More importantly, skeletal muscle preservation with ACVR2B-Fc establishes that targeting myostatin-family ligands using ACVR2B-Fc or related molecules is an important and potent therapeutic avenue in cancer cachexia.« less
Maranon, Rodrigo; Lima, Roberta; Spradley, Frank T; do Carmo, Jussara M; Zhang, Howei; Smith, Andrew D; Bui, Elizabeth; Thomas, R Lucas; Moulana, Mohadetheh; Hall, John E; Granger, Joey P; Reckelhoff, Jane F
2015-04-15
Women with polycystic ovary syndrome (PCOS) have hyperandrogenemia and increased prevalence of risk factors for cardiovascular disease, including elevated blood pressure. We recently characterized a hyperandrogenemic female rat (HAF) model of PCOS [chronic dihydrotestosterone (DHT) beginning at 4 wk of age] that exhibits similar characteristics as women with PCOS. In the present studies we tested the hypotheses that the elevated blood pressure in HAF rats is mediated in part by sympathetic activation, renal nerves, and melanocortin-4 receptor (MC4R) activation. Adrenergic blockade with terazosin and propranolol or renal denervation reduced mean arterial pressure (MAP by telemetry) in HAF rats but not controls. Hypothalamic MC4R expression was higher in HAF rats than controls, and central nervous system MC4R antagonism with SHU-9119 (1 nmol/h icv) reduced MAP in HAF rats. Taking a genetic approach, MC4R null and wild-type (WT) female rats were treated with DHT or placebo from 5 to 16 wk of age. MC4R null rats were obese and had higher MAP than WT control rats, and while DHT increased MAP in WT controls, DHT failed to further increase MAP in MC4R null rats. These data suggest that increases in MAP with chronic hyperandrogenemia in female rats are due, in part, to activation of the sympathetic nervous system, renal nerves, and MC4R and may provide novel insights into the mechanisms responsible for hypertension in women with hyperandrogenemia such as PCOS. Copyright © 2015 the American Physiological Society.
Benny Klimek, Margaret E; Aydogdu, Tufan; Link, Majik J; Pons, Marianne; Koniaris, Leonidas G; Zimmers, Teresa A
2010-01-15
Cachexia, progressive loss of fat and muscle mass despite adequate nutrition, is a devastating complication of cancer associated with poor quality of life and increased mortality. Myostatin is a potent tonic muscle growth inhibitor. We tested how myostatin inhibition might influence cancer cachexia using genetic and pharmacological approaches. First, hypermuscular myostatin null mice were injected with Lewis lung carcinoma or B16F10 melanoma cells. Myostatin null mice were more sensitive to tumor-induced cachexia, losing more absolute mass and proportionately more muscle mass than wild-type mice. Because myostatin null mice lack expression from development, however, we also sought to manipulate myostatin acutely. The histone deacetylase inhibitor Trichostatin A has been shown to increase muscle mass in normal and dystrophic mice by inducing the myostatin inhibitor, follistatin. Although Trichostatin A administration induced muscle growth in normal mice, it failed to preserve muscle in colon-26 cancer cachexia. Finally we sought to inhibit myostatin and related ligands by administration of the Activin receptor extracellular domain/Fc fusion protein, ACVR2B-Fc. Systemic administration of ACVR2B-Fc potently inhibited muscle wasting and protected adipose stores in both colon-26 and Lewis lung carcinoma cachexia, without affecting tumor growth. Enhanced cachexia in myostatin knockouts indicates that host-derived myostatin is not the sole mediator of muscle wasting in cancer. More importantly, skeletal muscle preservation with ACVR2B-Fc establishes that targeting myostatin-family ligands using ACVR2B-Fc or related molecules is an important and potent therapeutic avenue in cancer cachexia. Copyright 2009 Elsevier Inc. All rights reserved.
Evolution of the human immunodeficiency virus envelope gene is dominated by purifying selection.
Edwards, C T T; Holmes, E C; Pybus, O G; Wilson, D J; Viscidi, R P; Abrams, E J; Phillips, R E; Drummond, A J
2006-11-01
The evolution of the human immunodeficiency virus (HIV-1) during chronic infection involves the rapid, continuous turnover of genetic diversity. However, the role of natural selection, relative to random genetic drift, in governing this process is unclear. We tested a stochastic model of genetic drift using partial envelope sequences sampled longitudinally in 28 infected children. In each case the Bayesian posterior (empirical) distribution of coalescent genealogies was estimated using Markov chain Monte Carlo methods. Posterior predictive simulation was then used to generate a null distribution of genealogies assuming neutrality, with the null and empirical distributions compared using four genealogy-based summary statistics sensitive to nonneutral evolution. Because both null and empirical distributions were generated within a coalescent framework, we were able to explicitly account for the confounding influence of demography. From the distribution of corrected P-values across patients, we conclude that empirical genealogies are more asymmetric than expected if evolution is driven by mutation and genetic drift only, with an excess of low-frequency polymorphisms in the population. This indicates that although drift may still play an important role, natural selection has a strong influence on the evolution of HIV-1 envelope. A negative relationship between effective population size and substitution rate indicates that as the efficacy of selection increases, a smaller proportion of mutations approach fixation in the population. This suggests the presence of deleterious mutations. We therefore conclude that intrahost HIV-1 evolution in envelope is dominated by purifying selection against low-frequency deleterious mutations that do not reach fixation.
Numerical simulations of sheared magnetic lines at the solar null line
NASA Astrophysics Data System (ADS)
Kuźma, B.; Murawski, K.; Solov'ev, A.
2015-05-01
Aims: We perform numerical simulations of sheared magnetic lines at the magnetic null line configuration of two magnetic arcades that are settled in a gravitationally stratified and magnetically confined solar corona. Methods: We developed a general analytical model of a 2.5D solar atmospheric structure. As a particular application of this model, we adopted it for the curved magnetic field lines with an inverted Y shape that compose the null line above two magnetic arcades, which are embedded in the solar atmosphere that is specified by the realistic temperature distribution. The physical system is described by 2.5D magnetohydrodynamic equations that are numerically solved by the FLASH code. Results: The magnetic field line shearing, implemented about 200 km below the transition region, results in Alfvén and magnetoacoustic waves that are able to penetrate solar coronal regions above the magnetic null line. As a result of the coupling of these waves, partial reflection from the transition region and scattering from inhomogeneous regions the Alfvén waves experience fast attenuation on time scales comparable to their wave periods, and the physical system relaxes in time. The attenuation time grows with the large amplitude and characteristic growing time of the shearing. Conclusions: By having chosen a different magnetic flux function, the analytical model we devised can be adopted to derive equilibrium conditions for a diversity of 2.5D magnetic structures in the solar atmosphere. Movie associated to Fig. 5 is available in electronic form at http://www.aanda.org
Orexin Receptor Antagonism Improves Sleep and Reduces Seizures in Kcna1-null Mice.
Roundtree, Harrison M; Simeone, Timothy A; Johnson, Chaz; Matthews, Stephanie A; Samson, Kaeli K; Simeone, Kristina A
2016-02-01
Comorbid sleep disorders occur in approximately one-third of people with epilepsy. Seizures and sleep disorders have an interdependent relationship where the occurrence of one can exacerbate the other. Orexin, a wake-promoting neuropeptide, is associated with sleep disorder symptoms. Here, we tested the hypothesis that orexin dysregulation plays a role in the comorbid sleep disorder symptoms in the Kcna1-null mouse model of temporal lobe epilepsy. Rest-activity was assessed using infrared beam actigraphy. Sleep architecture and seizures were assessed using continuous video-electroencephalography-electromyography recordings in Kcna1-null mice treated with vehicle or the dual orexin receptor antagonist, almorexant (100 mg/kg, intraperitoneally). Orexin levels in the lateral hypothalamus/perifornical region (LH/P) and hypothalamic pathology were assessed with immunohistochemistry and oxygen polarography. Kcna1-null mice have increased latency to rapid eye movement (REM) sleep onset, sleep fragmentation, and number of wake epochs. The numbers of REM and non-REM (NREM) sleep epochs are significantly reduced in Kcna1-null mice. Severe seizures propagate to the wake-promoting LH/P where injury is apparent (indicated by astrogliosis, blood-brain barrier permeability, and impaired mitochondrial function). The number of orexin-positive neurons is increased in the LH/P compared to wild-type LH/P. Treatment with a dual orexin receptor antagonist significantly increases the number and duration of NREM sleep epochs and reduces the latency to REM sleep onset. Further, almorexant treatment reduces the incidence of severe seizures and overall seizure burden. Interestingly, we report a significant positive correlation between latency to REM onset and seizure burden in Kcna1-null mice. Dual orexin receptor antagonists may be an effective sleeping aid in epilepsy, and warrants further study on their somnogenic and ant-seizure effects in other epilepsy models. © 2016 Associated Professional Sleep Societies, LLC.
Measurement Via Optical Near-Nulling and Subaperture Stitching
NASA Technical Reports Server (NTRS)
Forbes, Greg; De Vries, Gary; Murphy, Paul; Brophy, Chris
2012-01-01
A subaperture stitching interferometer system provides near-nulling of a subaperture wavefront reflected from an object of interest over a portion of a surface of the object. A variable optical element located in the radiation path adjustably provides near-nulling to facilitate stitching of subaperture interferograms, creating an interferogram representative of the entire surface of interest. This enables testing of aspheric surfaces without null optics customized for each surface prescription. The surface shapes of objects such as lenses and other precision components are often measured with interferometry. However, interferometers have a limited capture range, and thus the test wavefront cannot be too different from the reference or the interference cannot be analyzed. Furthermore, the performance of the interferometer is usually best when the test and reference wavefronts are nearly identical (referred to as a null condition). Thus, it is necessary when performing such measurements to correct for known variations in shape to ensure that unintended variations are within the capture range of the interferometer and accurately measured. This invention is a system for nearnulling within a subaperture stitching interferometer, although in principle, the concept can be employed by wavefront measuring gauges other than interferometers. The system employs a light source for providing coherent radiation of a subaperture extent. An object of interest is placed to modify the radiation (e.g., to reflect or pass the radiation), and a variable optical element is located to interact with, and nearly null, the affected radiation. A detector or imaging device is situated to obtain interference patterns in the modified radiation. Multiple subaperture interferograms are taken and are stitched, or joined, to provide an interferogram representative of the entire surface of the object of interest. The primary aspect of the invention is the use of adjustable corrective optics in the context of subaperture stitching near-nulling interferometry, wherein a complex surface is analyzed via multiple, separate, overlapping interferograms. For complex surfaces, the problem of managing the identification and placement of corrective optics becomes even more pronounced, to the extent that in most cases the null corrector optics are specific to the particular asphere prescription and no others (i.e. another asphere requires completely different null correction optics). In principle, the near-nulling technique does not require subaperture stitching at all. Building a near-null system that is practically useful relies on two key features: simplicity and universality. If the system is too complex, it will be difficult to calibrate and model its manufacturing errors, rendering it useless as a precision metrology tool and/or prohibitively expensive. If the system is not applicable to a wide range of test parts, then it does not provide significant value over conventional null-correction technology. Subaperture stitching enables simpler and more universal near-null systems to be effective, because a fraction of a surface is necessarily less complex than the whole surface (excepting the extreme case of a fractal surface description). The technique of near-nulling can significantly enhance aspheric subaperture stitching capability by allowing the interferometer to capture a wider range of aspheres. More over, subaperture stitching is essential to a truly effective near-nulling system, since looking at a fraction of the surface keeps the wavefront complexity within the capability of a relatively simple nearnull apparatus. Furthermore, by reducing the subaperture size, the complexity of the measured wavefront can be reduced until it is within the capability of the near-null design.
Non-Deterministic Modelling of Food-Web Dynamics
Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam
2014-01-01
A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245
Greenbury, Sam F.; Schaper, Steffen; Ahnert, Sebastian E.; Louis, Ard A.
2016-01-01
Mutational neighbourhoods in genotype-phenotype (GP) maps are widely believed to be more likely to share characteristics than expected from random chance. Such genetic correlations should strongly influence evolutionary dynamics. We explore and quantify these intuitions by comparing three GP maps—a model for RNA secondary structure, the HP model for protein tertiary structure, and the Polyomino model for protein quaternary structure—to a simple random null model that maintains the number of genotypes mapping to each phenotype, but assigns genotypes randomly. The mutational neighbourhood of a genotype in these GP maps is much more likely to contain genotypes mapping to the same phenotype than in the random null model. Such neutral correlations can be quantified by the robustness to mutations, which can be many orders of magnitude larger than that of the null model, and crucially, above the critical threshold for the formation of large neutral networks of mutationally connected genotypes which enhance the capacity for the exploration of phenotypic novelty. Thus neutral correlations increase evolvability. We also study non-neutral correlations: Compared to the null model, i) If a particular (non-neutral) phenotype is found once in the 1-mutation neighbourhood of a genotype, then the chance of finding that phenotype multiple times in this neighbourhood is larger than expected; ii) If two genotypes are connected by a single neutral mutation, then their respective non-neutral 1-mutation neighbourhoods are more likely to be similar; iii) If a genotype maps to a folding or self-assembling phenotype, then its non-neutral neighbours are less likely to be a potentially deleterious non-folding or non-assembling phenotype. Non-neutral correlations of type i) and ii) reduce the rate at which new phenotypes can be found by neutral exploration, and so may diminish evolvability, while non-neutral correlations of type iii) may instead facilitate evolutionary exploration and so increase evolvability. PMID:26937652
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.
2014-01-01
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V -fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
Role of CYP2B in Phenobarbital-Induced Hepatocyte Proliferation in Mice.
Li, Lei; Bao, Xiaochen; Zhang, Qing-Yu; Negishi, Masahiko; Ding, Xinxin
2017-08-01
Phenobarbital (PB) promotes liver tumorigenesis in rodents, in part through activation of the constitutive androstane receptor (CAR) and the consequent changes in hepatic gene expression and increases in hepatocyte proliferation. A typical effect of CAR activation by PB is a marked induction of Cyp2b10 expression in the liver; the latter has been suspected to be vital for PB-induced hepatocellular proliferation. This hypothesis was tested here by using a Cyp2a(4/5)bgs -null (null) mouse model in which all Cyp2b genes are deleted. Adult male and female wild-type (WT) and null mice were treated intraperitoneally with PB at 50 mg/kg once daily for 5 successive days and tested on day 6. The liver-to-body weight ratio, an indicator of liver hypertrophy, was increased by 47% in male WT mice, but by only 22% in male Cyp2a(4/5)bgs -null mice, by the PB treatment. The fractions of bromodeoxyuridine-positive hepatocyte nuclei, assessed as a measure of the rate of hepatocyte proliferation, were also significantly lower in PB-treated male null mice compared with PB-treated male WT mice. However, whereas few proliferating hepatocytes were detected in saline-treated mice, many proliferating hepatocytes were still detected in PB-treated male null mice. In contrast, female WT mice were much less sensitive than male WT mice to PB-induced hepatocyte proliferation, and PB-treated female WT and PB-treated female null mice did not show significant difference in rates of hepatocyte proliferation. These results indicate that CYP2B induction plays a significant, but partial, role in PB-induced hepatocyte proliferation in male mice. U.S. Government work not protected by U.S. copyright.
TBX6 Null Variants and a Common Hypomorphic Allele in Congenital Scoliosis
Wu, N.; Ming, X.; Xiao, J.; Wu, Z.; Chen, X.; Shinawi, M.; Shen, Y.; Yu, G.; Liu, J.; Xie, H.; Gucev, Z.S.; Liu, S.; Yang, N.; Al-Kateb, H.; Chen, J.; Zhang, Jian; Hauser, N.; Zhang, T.; Tasic, V.; Liu, P.; Su, X.; Pan, X.; Liu, C.; Wang, L.; Shen, Joseph; Shen, Jianxiong; Chen, Y.; Zhang, T.; Zhang, Jianguo; Choy, K.W.; Wang, Jun; Wang, Q.; Li, S.; Zhou, W.; Guo, J.; Wang, Y.; Zhang, C.; Zhao, H.; An, Y.; Zhao, Y.; Wang, Jiucun; Liu, Z.; Zuo, Y.; Tian, Y.; Weng, X.; Sutton, V.R.; Wang, H.; Ming, Y.; Kulkarni, S.; Zhong, T.P.; Giampietro, P.F.; Dunwoodie, S.L.; Cheung, S.W.; Zhang, X.; Jin, L.; Lupski, J.R.; Qiu, G.; Zhang, F.
2015-01-01
BACKGROUND Congenital scoliosis is a common type of vertebral malformation. Genetic susceptibility has been implicated in congenital scoliosis. METHODS We evaluated 161 Han Chinese persons with sporadic congenital scoliosis, 166 Han Chinese controls, and 2 pedigrees, family members of which had a 16p11.2 deletion, using comparative genomic hybridization, quantitative polymerase-chain-reaction analysis, and DNA sequencing. We carried out tests of replication using an additional series of 76 Han Chinese persons with congenital scoliosis and a multi-center series of 42 persons with 16p11.2 deletions. RESULTS We identified a total of 17 heterozygous TBX6 null mutations in the 161 persons with sporadic congenital scoliosis (11%); we did not observe any null mutations in TBX6 in 166 controls (P<3.8×10−6). These null alleles include copy-number variants (12 instances of a 16p11.2 deletion affecting TBX6) and single-nucleotide variants (1 nonsense and 4 frame-shift mutations). However, the discordant intrafamilial phenotypes of 16p11.2 deletion carriers suggest that heterozygous TBX6 null mutation is insufficient to cause congenital scoliosis. We went on to identify a common TBX6 haplotype as the second risk allele in all 17 carriers of TBX6 null mutations (P<1.1×10−6). Replication studies involving additional persons with congenital scoliosis who carried a deletion affecting TBX6 confirmed this compound inheritance model. In vitro functional assays suggested that the risk haplotype is a hypomorphic allele. Hemivertebrae are characteristic of TBX6-associated congenital scoliosis. CONCLUSIONS Compound inheritance of a rare null mutation and a hypomorphic allele of TBX6 accounted for up to 11% of congenital scoliosis cases in the series that we analyzed. PMID:25564734
2013-01-01
Background Metabolic alteration is one of the hallmarks of carcinogenesis. We aimed to identify certain metabolic biomarkers for the early detection of pancreatic cancer (PC) using the transgenic PTEN-null mouse model. Pancreas-specific deletion of PTEN in mouse caused progressive premalignant lesions such as highly proliferative ductal metaplasia. We imaged the mitochondrial redox state of the pancreases of the transgenic mice approximately eight months old using the redox scanner, i.e., the nicotinamide adenine dinucleotide/oxidized flavoproteins (NADH/Fp) fluorescence imager at low temperature. Two different approaches, the global averaging of the redox indices without considering tissue heterogeneity along tissue depth and the univariate analysis of multi-section data using tissue depth as a covariate were adopted for the statistical analysis of the multi-section imaging data. The standard deviations of the redox indices and the histogram analysis with Gaussian fit were used to determine the tissue heterogeneity. Results All methods show consistently that the PTEN deficient pancreases (Pdx1-Cre;PTENlox/lox) were significantly more heterogeneous in their mitochondrial redox state compared to the controls (PTENlox/lox). Statistical analysis taking into account the variations of the redox state with tissue depth further shows that PTEN deletion significantly shifted the pancreatic tissue to an overall more oxidized state. Oxidization of the PTEN-null group was not seen when the imaging data were analyzed by global averaging without considering the variation of the redox indices along tissue depth, indicating the importance of taking tissue heterogeneity into account for the statistical analysis of the multi-section imaging data. Conclusions This study reveals a possible link between the mitochondrial redox state alteration of the pancreas and its malignant transformation and may be further developed for establishing potential metabolic biomarkers for the early diagnosis of pancreatic cancer. PMID:24252270
Natural disease history of mouse models for limb girdle muscular dystrophy types 2D and 2F
Putker, K.; Tanganyika-de Winter, C. L.; Boertje-van der Meulen, J. W.; van Vliet, L.; Overzier, M.; Plomp, J. J.; Aartsma-Rus, A.; van Putten, M.
2017-01-01
Limb-girdle muscular dystrophy types 2D and 2F (LGMD 2D and 2F) are autosomal recessive disorders caused by mutations in the alpha- and delta sarcoglycan genes, respectively, leading to severe muscle weakness and degeneration. The cause of the disease has been well characterized and a number of animal models are available for pre-clinical studies to test potential therapeutic interventions. To facilitate transition from drug discovery to clinical trials, standardized procedures and natural disease history data were collected for these mouse models. Implementing the TREAD-NMD standardized operating procedures, we here subjected LGMD2D (SGCA-null), LGMD2F (SGCD-null) and wild type (C57BL/6J) mice to five functional tests from the age of 4 to 32 weeks. To assess whether the functional test regime interfered with disease pathology, sedentary groups were taken along. Muscle physiology testing of tibialis anterior muscle was performed at the age of 34 weeks. Muscle histopathology and gene expression was analysed in skeletal muscles and heart. Muscle histopathology and gene expression was analysed in skeletal muscles and heart. Mice successfully accomplished the functional tests, which did not interfere with disease pathology. Muscle function of SGCA- and SGCD-null mice was impaired and declined over time. Interestingly, female SGCD-null mice outperformed males in the two and four limb hanging tests, which proved the most suitable non-invasive tests to assess muscle function. Muscle physiology testing of tibialis anterior muscle revealed lower specific force and higher susceptibility to eccentric-induced damage in LGMD mice. Analyzing muscle histopathology and gene expression, we identified the diaphragm as the most affected muscle in LGMD strains. Cardiac fibrosis was found in SGCD-null mice, being more severe in males than in females. Our study offers a comprehensive natural history dataset which will be useful to design standardized tests and future pre-clinical studies in LGMD2D and 2F mice. PMID:28797108
Demonstrating Broadband Billion-to-One Contrast with the Visible Nulling Coronagraph
NASA Technical Reports Server (NTRS)
Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Miller, Ian J.; Bolcar, Matthew R.; Clampin, Mark; Helmbrecht, Michael A.; Mallik, Udayan
2015-01-01
The key to broadband operation of the Visible Nulling Coronagraph (VNC) is achieving a condition of quasi- achromatic destructive interference between combined beams. Here we present efforts towards meeting this goal using Fresnel rhombs in each interferometric arm as orthogonally aligned half wave phase retarders. The milestone goal of the demonstration is to achieve 1 × 10-9 contrast at 2/D over a 40 nm bandpass centered at 633 nm. Rhombs have been designed and fabricated, and a multi-step approach to alignment using coarse positioners for each rhomb and pair has been developed to get within range of piezo stages used for fine positioning. The previously demonstrated narrowband VNC sensing and control approach that uses a segmented deformable mirror is being adapted to broadband to include fine positioning of the piezo-mounted rhombs, all demonstrated in a low-pressure environment.
Caffrey, James R; Hughes, Barry D; Britto, Joanne M; Landman, Kerry A
2014-01-01
The characteristic six-layered appearance of the neocortex arises from the correct positioning of pyramidal neurons during development and alterations in this process can cause intellectual disabilities and developmental delay. Malformations in cortical development arise when neurons either fail to migrate properly from the germinal zones or fail to cease migration in the correct laminar position within the cortical plate. The Reelin signalling pathway is vital for correct neuronal positioning as loss of Reelin leads to a partially inverted cortex. The precise biological function of Reelin remains controversial and debate surrounds its role as a chemoattractant or stop signal for migrating neurons. To investigate this further we developed an in silico agent-based model of cortical layer formation. Using this model we tested four biologically plausible hypotheses for neuron motility and four biologically plausible hypotheses for the loss of neuron motility (conversion from migration). A matrix of 16 combinations of motility and conversion rules was applied against the known structure of mouse cortical layers in the wild-type cortex, the Reelin-null mutant, the Dab1-null mutant and a conditional Dab1 mutant. Using this approach, many combinations of motility and conversion mechanisms can be rejected. For example, the model does not support Reelin acting as a repelling or as a stopping signal. In contrast, the study lends very strong support to the notion that the glycoprotein Reelin acts as a chemoattractant for neurons. Furthermore, the most viable proposition for the conversion mechanism is one in which conversion is affected by a motile neuron sensing in the near vicinity neurons that have already converted. Therefore, this model helps elucidate the function of Reelin during neuronal migration and cortical development.
Liu, Aiming; Krausz, Kristopher W; Fang, Zhong-Ze; Brocker, Chad; Qu, Aijuan; Gonzalez, Frank J
2014-04-01
Gemfibrozil, a ligand of peroxisome proliferator-activated receptor α (PPARα), is one of the most widely prescribed anti-dyslipidemia fibrate drugs. Among the adverse reactions observed with gemfibrozil are alterations in liver function, cholestatic jaundice, and cholelithiasis. However, the mechanisms underlying these toxicities are poorly understood. In this study, wild-type and Ppara-null mice were dosed with a gemfibrozil-containing diet for 14 days. Ultra-performance chromatography electrospray ionization quadrupole time-of-flight mass spectrometry-based metabolomics and traditional approaches were used to assess the mechanism of gemfibrozil-induced hepatotoxicity. Unsupervised multivariate data analysis revealed four lysophosphatidylcholine components in wild-type mice that varied more dramatically than those in Ppara-null mice. Targeted metabolomics revealed taurocholic acid and tauro-α-muricholic acid/tauro-β-muricholic acid were significantly increased in wild-type mice, but not in Ppara-null mice. In addition to the above perturbations in metabolite homeostasis, phenotypic alterations in the liver were identified. Hepatic genes involved in metabolism and transportation of lysophosphatidylcholine and bile acid compounds were differentially regulated between wild-type and Ppara-null mice, in agreement with the observed downstream metabolic alterations. These data suggest that PPARα mediates gemfibrozil-induced hepatotoxicity in part by disrupting phospholipid and bile acid homeostasis.
Low Cost Science Teaching Equipment for Visually Impaired Children
NASA Astrophysics Data System (ADS)
Gupta, H. O.; Singh, Rakshpal
1998-05-01
A low cost null detector an electronic thermometer and a colorimeter have been designed and developed for enabling visually impaired children (VIC) to do experiments in science that normally are accessible only to sighted children. The instruments are based on audio null detection in a balanced bridge and use a themistor for sensing the temperature and an LDR for color change. The analog output can be tactually read by VIC. The equipment has been tested for suitability with VIC. The approach followed in developing these equipment would be generally appropriate to a wide variety of science equipment for VIC by incorporating suitable sensors.
NASA Astrophysics Data System (ADS)
Ben Amara, Jamel; Bouzidi, Hedi
2018-01-01
In this paper, we consider a linear hybrid system which is composed by two non-homogeneous rods connected by a point mass with Dirichlet boundary conditions on the left end and a boundary control acts on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than the regularities.
Eisenhart lifts and symmetries of time-dependent systems
NASA Astrophysics Data System (ADS)
Cariglia, M.; Duval, C.; Gibbons, G. W.; Horváthy, P. A.
2016-10-01
Certain dissipative systems, such as Caldirola and Kannai's damped simple harmonic oscillator, may be modelled by time-dependent Lagrangian and hence time dependent Hamiltonian systems with n degrees of freedom. In this paper we treat these systems, their projective and conformal symmetries as well as their quantisation from the point of view of the Eisenhart lift to a Bargmann spacetime in n + 2 dimensions, equipped with its covariantly constant null Killing vector field. Reparametrisation of the time variable corresponds to conformal rescalings of the Bargmann metric. We show how the Arnold map lifts to Bargmann spacetime. We contrast the greater generality of the Caldirola-Kannai approach with that of Arnold and Bateman. At the level of quantum mechanics, we are able to show how the relevant Schrödinger equation emerges naturally using the techniques of quantum field theory in curved spacetimes, since a covariantly constant null Killing vector field gives rise to well defined one particle Hilbert space. Time-dependent Lagrangians arise naturally also in cosmology and give rise to the phenomenon of Hubble friction. We provide an account of this for Friedmann-Lemaître and Bianchi cosmologies and how it fits in with our previous discussion in the non-relativistic limit.
Primordial power spectrum features and consequences
NASA Astrophysics Data System (ADS)
Goswami, G.
2014-03-01
The present Cosmic Microwave Background (CMB) temperature and polarization anisotropy data is consistent with not only a power law scalar primordial power spectrum (PPS) with a small running but also with the scalar PPS having very sharp features. This has motivated inflationary models with such sharp features. Recently, even the possibility of having nulls in the power spectrum (at certain scales) has been considered. The existence of these nulls has been shown in linear perturbation theory. What shall be the effect of higher order corrections on such nulls? Inspired by this question, we have attempted to calculate quantum radiative corrections to the Fourier transform of the 2-point function in a toy field theory and address the issue of how these corrections to the power spectrum behave in models in which the tree-level power spectrum has a sharp dip (but not a null). In particular, we have considered the possibility of the relative enhancement of radiative corrections in a model in which the tree-level spectrum goes through a dip in power at a certain scale. The mode functions of the field (whose power spectrum is to be evaluated) are chosen such that they undergo the kind of dynamics that leads to a sharp dip in the tree level power spectrum. Next, we have considered the situation in which this field has quartic self interactions, and found one loop correction in a suitably chosen renormalization scheme. Thus, we have attempted to answer the following key question in the context of this toy model (which is as important in the realistic case): In the chosen renormalization scheme, can quantum radiative corrections be enhanced relative to tree-level power spectrum at scales, at which sharp dips appear in the tree-level spectrum?
NASA Astrophysics Data System (ADS)
Miller, Kelsey; Guyon, Olivier
2016-07-01
This paper presents the early-stage simulation results of linear dark field control (LDFC) as a new approach to maintaining a stable dark hole within a stellar post-coronagraphic PSF. In practice, conventional speckle nulling is used to create a dark hole in the PSF, and LDFC is then employed to maintain the dark field by using information from the bright speckle field. The concept exploits the linear response of the bright speckle intensity to wavefront variations in the pupil, and therefore has many advantages over conventional speckle nulling as a method for stabilizing the dark hole. In theory, LDFC is faster, more sensitive, and more robust than using conventional speckle nulling techniques, like electric field conjugation, to maintain the dark hole. In this paper, LDFC theory, linear bright speckle characterization, and first results in simulation are presented as an initial step toward the deployment of LDFC on the UA Wavefront Control testbed in the coming year.
Telescopes in Near Space: Balloon Exoplanet Nulling Interferometer (BigBENI)
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Mauk, Robin
2012-01-01
A significant and often overlooked path to advancing both science and technology for direct imaging and spectroscopic characterization of exosolar planets is to fly "near space" missions, i.e. balloon borne exosolar missions. A near space balloon mission with two or more telescopes, coherently combined, is capable of achieving a subset of the mission science goals of a single large space telescope at a small fraction of the cost. Additionally such an approach advances technologies toward flight readiness for space flight. Herein we discuss the feasibility of flying two 1.2 meter telescopes, with a baseline separation of 3.6 meters, operating in visible light, on a composite boom structure coupled to a modified visible nulling coronagraph operating to achieve an inner working angle of 60 milli-arcseconds. We discuss the potential science return, atmospheric residuals at 135,000 feet, pointing control and visible nulling and evaluate the state-or-art of these technologies with regards to balloon missions.
Accounting for heterogeneity in meta-analysis using a multiplicative model-an empirical study.
Mawdsley, David; Higgins, Julian P T; Sutton, Alex J; Abrams, Keith R
2017-03-01
In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In this paper, we compare the two models using a random sample of 448 meta-analyses drawn from the Cochrane Database of Systematic Reviews. In general, differences in goodness of fit are modest. The multiplicative model tends to give results that are closer to the null, with a narrower confidence interval. Both approaches make different assumptions about the outcome of the meta-analysis. In our opinion, the selection of the more appropriate model will often be guided by whether the multiplicative model's assumption of a single effect size is plausible. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Fine Guidance Sensing for Coronagraphic Observatories
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.
2011-01-01
Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, was to collect all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after passing through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or a photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal point is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and the uncertainties associated with contribution and efficiency of concentration reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentration to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists in pumping/injection wells and the second in permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets. NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that in context of nonlinear model, second variant avoids to underestimate parameter uncertainty which may lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.
Saffarini, Camelia M; Heger, Nicholas E; Yamasaki, Hideki; Liu, Tao; Hall, Susan J; Boekelheide, Kim
2012-01-01
Phthalate esters are commonly used plasticizers found in many household items, personal care products, and medical devices. Animal studies have shown that in utero exposure to di-(n-butyl) phthalate (DBP) within a critical window during gestation causes male reproductive tract abnormalities resembling testicular dysgenesis syndrome. Our studies utilized p53-deficient mice for their ability to display greater resistance to apoptosis during development. This model was chosen to determine whether multinucleated germ cells (MNG) induced by gestational DBP exposure could survive postnatally and evolve into testicular germ cell cancer. Pregnant dams were exposed to DBP (500 mg/kg/day) by oral gavage from gestational day 12 until birth. Perinatal effects were assessed on gestational day 19 and postnatal days 1, 4, 7, and 10 for the number of MNGs present in control and DBP-treated p53-heterozygous and null animals. As expected, DBP exposure induced MNGs, with greater numbers found in p53-null mice. Additionally, there was a time-dependent decrease in the incidence of MNGs during the early postnatal period. Histologic examination of adult mice exposed in utero to DBP revealed persistence of abnormal germ cells only in DBP-treated p53-null mice, not in p53-heterozygous or wild-type mice. Immunohistochemical staining of perinatal MNGs and adult abnormal germ cells was negative for both octamer-binding protein 3/4 and placental alkaline phosphatase. This unique model identified a role for p53 in the perinatal apoptosis of DBP-induced MNGs and provided insight into the long-term effects of gestational DBP exposure within a p53-null environment.
Tonnessen-Murray, Crystal; Ungerleider, Nathan A; Rao, Sonia G; Wasylishen, Amanda R; Frey, Wesley D; Jackson, James G
2018-05-28
p53 is a transcription factor that regulates expression of genes involved in cell cycle arrest, senescence, and apoptosis. TP53 harbors mutations that inactivate its transcriptional activity in roughly 30% of breast cancers, and these tumors are much more likely to undergo a pathological complete response to chemotherapy. Thus, the gene expression program activated by wild-type p53 contributes to a poor response. We used an in vivo genetic model system to comprehensively define the p53- and p21-dependent genes and pathways modulated in tumors following doxorubicin treatment. We identified genes differentially expressed in spontaneous mammary tumors harvested from treated MMTV-Wnt1 mice that respond poorly (Trp53+/+) or favorably (Trp53-null) and those that lack the critical senescence/arrest p53 target gene Cdkn1a. Trp53 wild-type tumors differentially expressed nearly 10-fold more genes than Trp53-null tumors after treatment. Pathway analyses showed that genes involved in cell cycle, senescence, and inflammation were enriched in treated Trp53 wild-type tumors; however, no genes/pathways were identified that adequately explain the superior cell death/tumor regression observed in Trp53-null tumors. Cdkn1a-null tumors that retained arrest capacity (responded poorly) and those that proliferated (responded well) after treatment had remarkably different gene regulation. For instance, Cdkn1a-null tumors that arrested upregulated Cdkn2a (p16), suggesting an alternative, p21-independent route to arrest. Live animal imaging of longitudinal gene expression of a senescence/inflammation gene reporter in Trp53+/+ tumors showed induction during and after chemotherapy treatment, while tumors were arrested, but expression rapidly diminished immediately upon relapse. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Gauging the cosmic acceleration with recent type Ia supernovae data sets
NASA Astrophysics Data System (ADS)
Velten, Hermano; Gomes, Syrios; Busti, Vinicius C.
2018-04-01
We revisit a model-independent estimator for cosmic acceleration based on type Ia supernovae distance measurements. This approach does not rely on any specific theory for gravity, energy content, nor parametrization for the scale factor or deceleration parameter and is based on falsifying the null hypothesis that the Universe never expanded in an accelerated way. By generating mock catalogs of known cosmologies, we test the robustness of this estimator, establishing its limits of applicability. We detail the pros and cons of such an approach. For example, we find that there are specific counterexamples in which the estimator wrongly provides evidence against acceleration in accelerating cosmologies. The dependence of the estimator on the H0 value is also discussed. Finally, we update the evidence for acceleration using the recent UNION2.1 and Joint Light-Curve Analysis samples. Contrary to recent claims, available data strongly favor an accelerated expansion of the Universe in complete agreement with the standard Λ CDM model.
Wacker, Michael J; Touchberry, Chad D; Silswal, Neerupma; Brotto, Leticia; Elmore, Chris J; Bonewald, Lynda F; Andresen, Jon; Brotto, Marco
2016-01-01
Autosomal recessive hypophosphatemic rickets (ARHR) is a heritable disorder characterized by hypophosphatemia, osteomalacia, and poor bone development. ARHR results from inactivating mutations in the DMP1 gene with the human phenotype being recapitulated in the Dmp1 null mouse model which displays elevated plasma fibroblast growth factor 23. While the bone phenotype has been well-characterized, it is not known what effects ARHR may also have on skeletal, cardiac, or vascular smooth muscle function, which is critical to understand in order to treat patients suffering from this condition. In this study, the extensor digitorum longus (EDL-fast-twitch muscle), soleus (SOL-slow-twitch muscle), heart, and aorta were removed from Dmp1 null mice and ex-vivo functional tests were simultaneously performed in collaboration by three different laboratories. Dmp1 null EDL and SOL muscles produced less force than wildtype muscles after normalization for physiological cross sectional area of the muscles. Both EDL and SOL muscles from Dmp1 null mice also produced less force after the addition of caffeine (which releases calcium from the sarcoplasmic reticulum) which may indicate problems in excitation contraction coupling in these mice. While the body weights of the Dmp1 null were smaller than wildtype, the heart weight to body weight ratio was higher. However, there were no differences in pathological hypertrophic gene expression compared to wildtype and maximal force of contraction was not different indicating that there may not be cardiac pathology under the tested conditions. We did observe a decrease in the rate of force development generated by cardiac muscle in the Dmp1 null which may be related to some of the deficits observed in skeletal muscle. There were no differences observed in aortic contractions induced by PGF2α or 5-HT or in endothelium-mediated acetylcholine-induced relaxations or endothelium-independent sodium nitroprusside-induced relaxations. In summary, these results indicate that there are deficiencies in both fast twitch and slow twitch muscle fiber type contractions in this model of ARHR, while there was less of a phenotype observed in cardiac muscle, and no differences observed in aortic function. These results may help explain skeletal muscle weakness reported by some patients with osteomalacia and need to be further investigated.
Wacker, Michael J.; Touchberry, Chad D.; Silswal, Neerupma; Brotto, Leticia; Elmore, Chris J.; Bonewald, Lynda F.; Andresen, Jon; Brotto, Marco
2016-01-01
Autosomal recessive hypophosphatemic rickets (ARHR) is a heritable disorder characterized by hypophosphatemia, osteomalacia, and poor bone development. ARHR results from inactivating mutations in the DMP1 gene with the human phenotype being recapitulated in the Dmp1 null mouse model which displays elevated plasma fibroblast growth factor 23. While the bone phenotype has been well-characterized, it is not known what effects ARHR may also have on skeletal, cardiac, or vascular smooth muscle function, which is critical to understand in order to treat patients suffering from this condition. In this study, the extensor digitorum longus (EDL-fast-twitch muscle), soleus (SOL–slow-twitch muscle), heart, and aorta were removed from Dmp1 null mice and ex-vivo functional tests were simultaneously performed in collaboration by three different laboratories. Dmp1 null EDL and SOL muscles produced less force than wildtype muscles after normalization for physiological cross sectional area of the muscles. Both EDL and SOL muscles from Dmp1 null mice also produced less force after the addition of caffeine (which releases calcium from the sarcoplasmic reticulum) which may indicate problems in excitation contraction coupling in these mice. While the body weights of the Dmp1 null were smaller than wildtype, the heart weight to body weight ratio was higher. However, there were no differences in pathological hypertrophic gene expression compared to wildtype and maximal force of contraction was not different indicating that there may not be cardiac pathology under the tested conditions. We did observe a decrease in the rate of force development generated by cardiac muscle in the Dmp1 null which may be related to some of the deficits observed in skeletal muscle. There were no differences observed in aortic contractions induced by PGF2α or 5-HT or in endothelium-mediated acetylcholine-induced relaxations or endothelium-independent sodium nitroprusside-induced relaxations. In summary, these results indicate that there are deficiencies in both fast twitch and slow twitch muscle fiber type contractions in this model of ARHR, while there was less of a phenotype observed in cardiac muscle, and no differences observed in aortic function. These results may help explain skeletal muscle weakness reported by some patients with osteomalacia and need to be further investigated. PMID:27242547
Rodrigue, Nicolas; Lartillot, Nicolas
2017-01-01
Codon substitution models have traditionally attempted to uncover signatures of adaptation within protein-coding genes by contrasting the rates of synonymous and non-synonymous substitutions. Another modeling approach, known as the mutation-selection framework, attempts to explicitly account for selective patterns at the amino acid level, with some approaches allowing for heterogeneity in these patterns across codon sites. Under such a model, substitutions at a given position occur at the neutral or nearly neutral rate when they are synonymous, or when they correspond to replacements between amino acids of similar fitness; substitutions from high to low (low to high) fitness amino acids have comparatively low (high) rates. Here, we study the use of such a mutation-selection framework as a null model for the detection of adaptation. Following previous works in this direction, we include a deviation parameter that has the effect of capturing the surplus, or deficit, in non-synonymous rates, relative to what would be expected under a mutation-selection modeling framework that includes a Dirichlet process approach to account for across-codon-site variation in amino acid fitness profiles. We use simulations, along with a few real data sets, to study the behavior of the approach, and find it to have good power with a low false-positive rate. Altogether, we emphasize the potential of recent mutation-selection models in the detection of adaptation, calling for further model refinements as well as large-scale applications. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Detection of long nulls in PSR B1706-16, a pulsar with large timing irregularities
NASA Astrophysics Data System (ADS)
Naidu, Arun; Joshi, Bhal Chandra; Manoharan, P. K.; Krishnakumar, M. A.
2018-04-01
Single pulse observations, characterizing in detail, the nulling behaviour of PSR B1706-16 are being reported for the first time in this paper. Our regular long duration monitoring of this pulsar reveals long nulls of 2-5 h with an overall nulling fraction of 31 ± 2 per cent. The pulsar shows two distinct phases of emission. It is usually in an active phase, characterized by pulsations interspersed with shorter nulls, with a nulling fraction of about 15 per cent, but it also rarely switches to an inactive phase, consisting of long nulls. The nulls in this pulsar are concurrent between 326.5 and 610 MHz. Profile mode changes accompanied by changes in fluctuation properties are seen in this pulsar, which switches from mode A before a null to mode B after the null. The distribution of null durations in this pulsar is bimodal. With its occasional long nulls, PSR B1706-16 joins the small group of intermediate nullers, which lie between the classical nullers and the intermittent pulsars. Similar to other intermediate nullers, PSR B1706-16 shows high timing noise, which could be due to its rare long nulls if one assumes that the slowdown rate during such nulls is different from that during the bursts.
On the importance of avoiding shortcuts in applying cognitive models to hierarchical data.
Boehm, Udo; Marsman, Maarten; Matzke, Dora; Wagenmakers, Eric-Jan
2018-06-12
Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassuk, James; Lendvay, Thomas S.; Sweet, Robert
Diseases and conditions affecting the lower urinary tract are a leading cause of dysfunctional sexual health, incontinence, infection, and kidney failure. The growth, differentiation, and repair of the bladder's epithelial lining are regulated, in part, by fibroblast growth factor (FGF)-7 and -10 via a paracrine cascade originating in the mesenchyme (lamina propria) and targeting the receptor for FGF-7 and -10 within the transitional epithelium (urothelium). The FGF-7 gene is located at the 15q15-q21.1 locus on chromosome 15 and four exons generate a 3.852-kb mRNA. Five duplicated FGF-7 gene sequences that localized to chromosome 9 were predicted not to generate functionalmore » protein products, thus validating the use of FGF-7-null mice as an experimental model. Recombinant FGF-7 and -10 induced proliferation of human urothelial cells in vitro and transitional epithelium of wild-type and FGF-7-null mice in vivo.To determine the extent that induction of urothelial cell proliferation during the bladder response to injury is dependent on FGF-7, an animal model of partial bladder outlet obstruction was developed. Unbiased stereology was used to measure the percentage of proliferating urothelial cells between obstructed groups of wild-type and FGF-7-null mice. The stereological analysis indicated that a statistical significant difference did not exist between the two groups, suggesting that FGF-7 is not essential for urothelial cell proliferation in response to partial outlet obstruction. In contrast, a significant increase in FGF-10 expression was observed in the obstructed FGF-7-null group, indicating that the compensatory pathway that functions in this model results in urothelial repair.« less
Tucker, Kristal R.; Godbey, Steven J.; Thiebaud, Nicolas; Fadool, Debra Ann
2012-01-01
Physiological and nutritional state can modify sensory ability and perception through hormone signaling. Obesity and related metabolic disorders present a chronic imbalance in hormonal signaling that could impact sensory systems. In the olfactory system, external chemical cues are transduced into electrical signals to encode information. It is becoming evident that this system can also detect internal chemical cues in the form of molecules of energy homeostasis and endocrine hormones, whereby neurons of the olfactory system are modulated to change animal behavior towards olfactory cues. We hypothesized that chronic imbalance in hormonal signaling and energy homeostasis due to obesity would thereby disrupt olfactory behaviors in mice. To test this idea, we utilized three mouse models of varying body weight, metabolic hormones, and visceral adiposity – 1) C57BL6/J mice maintained on a condensed-milk based, moderately high-fat diet (MHF) of 32% fat for 6 months as the diet-induced obesity model, 2) an obesity-resistant, lean line of mice due to a gene-targeted deletion of a voltage-dependent potassium channel (Kv1.3-null), and 3) a genetic model of obesity as a result of a gene-targeted deletion of the melanocortin 4 receptor (MC4R-null). Diet-induced obese (DIO) mice failed to find fatty-scented hidden peanut butter cracker, based solely on olfactory cues, any faster than an unscented hidden marble, initially suggesting general anosmia. However, when these DIO mice were challenged to find a sweet-scented hidden chocolate candy, they had no difficulty. Furthermore, DIO mice were able to discriminate between fatty acids that differ by a single double bond and are components of the MHF diet (linoleic and oleic acid) in a habituation-dishabituation paradigm. Obesity-resistant, Kv1.3-null mice exhibited no change in scented object retrieval when placed on the MHF-diet, nor did they perform differently than wild-type mice in parallel habituation-dishabituation paradigms of fatty food-related odor components. Genetically obese, MC4R-null mice successfully found hidden scented objects, but did so more slowly than lean, wild-type mice, in an object-dependent fashion. In habituation-dishabituation trials of general odorants, MC4R-null mice failed to discriminate a novel odor, but were able to distinguish two fatty acids. Object memory recognition tests for short- and long-term memory retention demonstrated that maintenance on the MHF diet did not modify ability to perform these tasks independent of whether mice became obese or were resistant to weight gain (Kv1.3-null), however, the genetically predisposed obese mice (MC4R-null) failed the long-term object memory recognition performed at 24 hours. These results demonstrate that even though both the DIO mice and genetically predisposed obese mice are obese, they vary in the degree to which they exhibit behavioral deficits in odor detection, odor discrimination, and long-term memory. PMID:22995978
A Random Walk in the Park: An Individual-Based Null Model for Behavioral Thermoregulation.
Vickers, Mathew; Schwarzkopf, Lin
2016-04-01
Behavioral thermoregulators leverage environmental temperature to control their body temperature. Habitat thermal quality therefore dictates the difficulty and necessity of precise thermoregulation, and the quality of behavioral thermoregulation in turn impacts organism fitness via the thermal dependence of performance. Comparing the body temperature of a thermoregulator with a null (non-thermoregulating) model allows us to estimate habitat thermal quality and the effect of behavioral thermoregulation on body temperature. We define a null model for behavioral thermoregulation that is a random walk in a temporally and spatially explicit thermal landscape. Predicted body temperature is also integrated through time, so recent body temperature history, environmental temperature, and movement influence current body temperature; there is no particular reliance on an organism's equilibrium temperature. We develop a metric called thermal benefit that equates body temperature to thermally dependent performance as a proxy for fitness. We measure thermal quality of two distinct tropical habitats as a temporally dynamic distribution that is an ergodic property of many random walks, and we compare it with the thermal benefit of real lizards in both habitats. Our simple model focuses on transient body temperature; as such, using it we observe such subtleties as shifts in the thermoregulatory effort and investment of lizards throughout the day, from thermoregulators to thermoconformers.
Mathematical Capture of Human Data for Computer Model Building and Validation
2014-04-03
weapon. The Projectile, the VDE , and the IDE weapons had effects of financial loss for the targeted participant, while the MRAD yielded its own...for LE, Centroid and TE for the baseline and The VDE weapon conditions since p-values exceeded α. All other conditions rejected the null...hypothesis except the LE for VDE weapon. The K-S Statistics were correspondingly lower for the measures that failed to reject the null hypothesis. The CDF
Regional variation in the hierarchical partitioning of diversity in coral-dwelling fishes.
Belmaker, Jonathan; Ziv, Yaron; Shashar, Nadav; Connolly, Sean R
2008-10-01
The size of the regional species pool may influence local patterns of diversity. However, it is unclear whether certain spatial scales are less sensitive to regional influences than others. Additive partitioning was used to separate coral-dwelling fish diversity to its alpha and beta components, at multiple scales, in several regions across the Indo-Pacific. We then examined how the relative contribution of these components changes with increased regional diversity. By employing specific random-placement null models, we overcome methodological problems with local-regional regressions. We show that, although alpha and beta diversities within each region are consistently different from random-placement null models, the increase in beta diversities among regions was similar to that predicted once heterogeneity in coral habitat was accounted for. In contrast, alpha diversity within single coral heads was limited and increased less than predicted by the null models. This was correlated with increased intraspecific aggregation in more diverse regions and is consistent with ecological limitations on the number of coexisting species at the local scale. These results suggest that, apart from very small spatial scales, variation in the partitioning of fish diversity along regional species richness gradients is driven overwhelmingly by the corresponding gradients in coral assemblage structure.
A phenological mid-domain effect in flowering diversity.
Morales, Manuel A; Dodge, Gary J; Inouye, David W
2005-01-01
In this paper, we test the mid-domain hypothesis as an explanation for observed patterns of flowering diversity in two sub-alpine communities of insect-pollinated plants. Observed species richness patterns showed an early-season increase in richness, a mid-season peak, and a late-season decrease. We show that a "mid-domain" null model can qualitatively match this pattern of flowering species richness, with R(2) values typically greater than 60%. We find significant or marginally significant departures from expected patterns of diversity for only 3 out of 12 year-site combinations. On the other hand, we do find a consistent pattern of departure when comparing observed versus null-model predicted flowering diversity averaged across years. Our results therefore support the hypothesis that ecological factors shape patterns of flowering phenology, but that the strength or nature of these environmental forcings may differ between years or the two habitats we studied, or may depend on species-specific characteristics of these plant communities. We conclude that mid-domain null models provide an important baseline from which to test departure of expected patterns of flowering diversity across temporal domains. Geometric constraints should be included first in the list of factors that drive seasonal patterns of flowering diversity.
On the Interpretation and Use of Mediation: Multiple Perspectives on Mediation Analysis.
Agler, Robert; De Boeck, Paul
2017-01-01
Mediation analysis has become a very popular approach in psychology, and it is one that is associated with multiple perspectives that are often at odds, often implicitly. Explicitly discussing these perspectives and their motivations, advantages, and disadvantages can help to provide clarity to conversations and research regarding the use and refinement of mediation models. We discuss five such pairs of perspectives on mediation analysis, their associated advantages and disadvantages, and their implications: with vs. without a mediation hypothesis, specific effects vs. a global model, directness vs. indirectness of causation, effect size vs. null hypothesis testing, and hypothesized vs. alternative explanations. Discussion of the perspectives is facilitated by a small simulation study. Some philosophical and linguistic considerations are briefly discussed, as well as some other perspectives we do not develop here.
The Landscape of Somatic Chromosomal Copy Number Aberrations in GEM Models of Prostate Carcinoma
Bianchi-Frias, Daniella; Hernandez, Susana A.; Coleman, Roger; Wu, Hong; Nelson, Peter S.
2015-01-01
Human prostate cancer (PCa) is known to harbor recurrent genomic aberrations consisting of chromosomal losses, gains, rearrangements and mutations that involve oncogenes and tumor suppressors. Genetically engineered mouse (GEM) models have been constructed to assess the causal role of these putative oncogenic events and provide molecular insight into disease pathogenesis. While GEM models generally initiate neoplasia by manipulating a single gene, expression profiles of GEM tumors typically comprise hundreds of transcript alterations. It is unclear whether these transcriptional changes represent the pleiotropic effects of single oncogenes, and/or cooperating genomic or epigenomic events. Therefore, it was determined if structural chromosomal alterations occur in GEM models of PCa and whether the changes are concordant with human carcinomas. Whole genome array-based comparative genomic hybridization (CGH) was used to identify somatic chromosomal copy number aberrations (SCNAs) in the widely used TRAMP, Hi-Myc, Pten-null and LADY GEM models. Interestingly, very few SCNAs were identified and the genomic architecture of Hi-Myc, Pten-null and LADY tumors were essentially identical to the germline. TRAMP neuroendocrine carcinomas contained SCNAs, which comprised three recurrent aberrations including a single copy loss of chromosome 19 (encoding Pten). In contrast, cell lines derived from the TRAMP, Hi-Myc, and Pten-null tumors were notable for numerous SCNAs that included copy gains of chromosome 15 (encoding Myc) and losses of chromosome 11 (encoding p53). PMID:25298407
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
Dhar-Mascareno, Manya; Rozenberg, Inna; Iqbal, Jahangir; Hussain, M Mahmood; Beckles, Daniel; Mascareno, Eduardo
2017-02-01
Hexim-1 is an inhibitor of RNA polymerase II transcription elongation. Decreased Hexim-1 expression in animal models of chronic diseases such as left ventricular hypertrophy, obesity and cancer triggered significant changes in adaptation and remodeling. The main aim of this study was to evaluate the role of Hexim1 in lipid metabolism focused in the progression of atherosclerosis and steatosis. We used the C57BL6 apolipoprotein E (ApoE null) crossed bred to C57BL6Hexim1 heterozygous mice to obtain ApoE null - Hexim1 heterozygous mice (ApoE-HT). Both ApoE null backgrounds were fed high fat diet for twelve weeks. Then, we evaluated lipid metabolism, atherosclerotic plaque formation and liver steatosis. In order to understand changes in the transcriptome of both backgrounds during the progression of steatosis, we performed Affymetrix mouse 430 2.0 microarray. After 12 weeks of HFD, ApoE null and ApoE-HT showed similar increase of cholesterol and triglycerides in plasma. Plaque composition was altered in ApoE-HT. Additionally, liver triglycerides and steatosis were decreased in ApoE-HT mice. Affymetrix analysis revealed that decreased steatosis might be due to impaired inducible SOCS3 expression in ApoE-HT mice. In conclusion, decreased Hexim-1 expression does not alter cholesterol metabolism in ApoE null background after HFD. However, it promotes stable atherosclerotic plaque and decreased steatosis by promoting the anti-inflammatory TGFβ pathway and blocking the expression of the inducible and pro-inflammatory expression of SOCS3 respectively. Published by Elsevier Ltd.
Cellular prion protein protects from inflammatory and neuropathic pain
2011-01-01
Cellular prion protein (PrPC) inhibits N-Methyl-D-Aspartate (NMDA) receptors. Since NMDA receptors play an important role in the transmission of pain signals in the dorsal horn of spinal cord, we thus wanted to determine if PrPC null mice show a reduced threshold for various pain behaviours. We compared nociceptive thresholds between wild type and PrPC null mice in models of inflammatory and neuropathic pain, in the presence and the absence of a NMDA receptor antagonist. 2-3 months old male PrPC null mice exhibited an MK-801 sensitive decrease in the paw withdrawal threshold in response both mechanical and thermal stimuli. PrPC null mice also exhibited significantly longer licking/biting time during both the first and second phases of formalin-induced inflammation of the paw, which was again prevented by treatment of the mice with MK-801, and responded more strongly to glutamate injection into the paw. Compared to wild type animals, PrPC null mice also exhibited a significantly greater nociceptive response (licking/biting) after intrathecal injection of NMDA. Sciatic nerve ligation resulted in MK-801 sensitive neuropathic pain in wild-type mice, but did not further augment the basal increase in pain behaviour observed in the null mice, suggesting that mice lacking PrPC may already be in a state of tonic central sensitization. Altogether, our data indicate that PrPC exerts a critical role in modulating nociceptive transmission at the spinal cord level, and fit with the concept of NMDA receptor hyperfunction in the absence of PrPC. PMID:21843375
Varadhan, Ravi; Wang, Sue-Jane
2016-01-01
Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-08-18
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
NASA Astrophysics Data System (ADS)
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-08-01
This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
Landua, John D.; Bu, Wen; Wei, Wei; Li, Fuhai; Wong, Stephen T.C.; Dickinson, Mary E.; Rosen, Jeffrey M.; Lewis, Michael T.
2014-01-01
Cancer stem cells (CSCs, or tumor-initiating cells) may be responsible for tumor formation in many types of cancer, including breast cancer. Using high-resolution imaging techniques, we analyzed the relationship between a Wnt-responsive, CSC-enriched population and the tumor vasculature using p53-null mouse mammary tumors transduced with a lentiviral Wnt signaling reporter. Consistent with their localization in the normal mammary gland, Wnt-responsive cells in tumors were enriched in the basal/myoepithelial population and generally located in close proximity to blood vessels. The Wnt-responsive CSCs did not colocalize with the hypoxia-inducible factor 1α-positive cells in these p53-null basal-like tumors. Average vessel diameter and vessel tortuosity were increased in p53-null mouse tumors, as well as in a human tumor xenograft as compared with the normal mammary gland. The combined strategy of monitoring the fluorescently labeled CSCs and vasculature using high-resolution imaging techniques provides a unique opportunity to study the CSC and its surrounding vasculature. PMID:24797826
Ugarte-Gil, M F; Sánchez-Zúñiga, C; Gamboa-Cárdenas, R V; Aliaga-Zamudio, M; Zevallos, F; Tineo-Pozo, G; Cucho-Venegas, J M; Mosqueira-Riveros, A; Medina, M; Perich-Campos, R A; Alfaro-Lozano, J L; Rodriguez-Bellido, Z; Alarcón, G S; Pastor-Asurza, C A
2016-03-01
To determine whether circulating CD4+CD28null and extra-thymic CD4+CD8+ double positive (DP) T cells are independently associated with damage accrual in systemic lupus erythematosus (SLE) patients. This cross-sectional study was conducted between September 2013 and April 2014 in consecutive SLE patients from our Rheumatology Department. CD4+CD28null and CD4+CD8+ DP T-cell frequencies were analyzed by flow-cytometry. The association of damage (SLICC/ACR Damage Index, SDI) and CD4+CD28null and CD4+CD8+ DP T cells was examined by univariable and multivariable Poisson regression models, adjusting for possible confounders. All analyses were performed using SPSS 21.0. Patients' (n = 133) mean (SD) age at diagnosis was 35.5 (16.8) years, 124 (93.2%) were female; all were mestizo (mixed Caucasian and Amerindian ancestry). Disease duration was 7.4 (6.8) years. The SLE Disease Activity Index was 5.5 (4.2), and the SDI 0.9 (1.2). The percentages of CD4+CD28null and CD4+CD8+ DP T cells were 17.1 (14.4) and 0.4 (1.4), respectively. The percentage of CD4+CD28null and CD4+CD8+ DP T cells were positively associated with a higher SDI in both univariable (rate ratio (RR) 1.02, 95% confidence interval (CI): 1.01-1.03 and 1.17, 95% CI: 1.07-1.27, respectively; p < 0.001 for both) and multivariable analyses RR 1.02, 95% CI: 1.01-1.03, p = 0.001 for CD4+CD28null T cells and 1.28, 95% CI: 1.13-1.44, p < 0.001 for CD4+CD8+ DP T cells). Only the renal domain remained associated with CD4+CD28null in multivariable analyses (RR 1.023 (1.002-1.045); p = 0.034). In SLE patients, CD4+CD28null and CD4+CD8+ DP T cells are independently associated with disease damage. Longitudinal studies are warranted to determine the predictive value of these associations. © The Author(s) 2015.
Quantifying Oldowan Stone Tool Production at Olduvai Gorge, Tanzania
Reti, Jay S.
2016-01-01
Recent research suggests that variation exists among and between Oldowan stone tool assemblages. Oldowan variation might represent differential constraints on raw materials used to produce these stone implements. Alternatively, variation among Oldowan assemblages could represent different methods that Oldowan producing hominins utilized to produce these lithic implements. Identifying differential patterns of stone tool production within the Oldowan has implications for assessing how stone tool technology evolved, how traditions of lithic production might have been culturally transmitted, and for defining the timing and scope of these evolutionary events. At present there is no null model to predict what morphological variation in the Oldowan should look like. Without such a model, quantifying whether Oldowan assemblages vary due to raw material constraints or whether they vary due to differences in production technique is not possible. This research establishes a null model for Oldowan lithic artifact morphological variation. To establish these expectations this research 1) models the expected range of variation through large scale reduction experiments, 2) develops an algorithm to categorize archaeological flakes based on how they are produced, and 3) statistically assesses the methods of production behavior used by Oldowan producing hominins at the site of DK from Olduvai Gorge, Tanzania via the experimental model. Results indicate that a subset of quartzite flakes deviate from the null expectations in a manner that demonstrates efficiency in flake manufacture, while some basalt flakes deviate from null expectations in a manner that demonstrates inefficiency in flake manufacture. The simultaneous presence of efficiency in stone tool production for one raw material (quartzite) and inefficiency in stone tool production for another raw material (basalt) suggests that Oldowan producing hominins at DK were able to mediate the economic costs associated with stone tool procurement by utilizing high-cost materials more efficiently than is expected and low-cost materials in an inefficient manner. PMID:26808429
The SAMPL4 host-guest blind prediction challenge: an overview.
Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K
2014-04-01
Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allows higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each others' studies, and to systematically explore parameter options.
System and Method for Null-Lens Wavefront Sensing
NASA Technical Reports Server (NTRS)
Hill, Peter C. (Inventor); Thompson, Patrick L. (Inventor); Aronstein, David L. (Inventor); Bolcar, Matthew R. (Inventor); Smith, Jeffrey S. (Inventor)
2015-01-01
A method of measuring aberrations in a null-lens including assembly and alignment aberrations. The null-lens may be used for measuring aberrations in an aspheric optic with the null-lens. Light propagates from the aspheric optic location through the null-lens, while sweeping a detector through the null-lens focal plane. Image data being is collected at locations about said focal plane. Light is simulated propagating to the collection locations for each collected image. Null-lens aberrations may extracted, e.g., applying image-based wavefront-sensing to collected images and simulation results. The null-lens aberrations improve accuracy in measuring aspheric optic aberrations.
Adaptive jammer nulling in EHF communications satellites
NASA Astrophysics Data System (ADS)
Bhagwan, Jai; Kavanagh, Stephen; Yen, J. L.
A preliminary investigation is reviewed concerning adaptive null steering multibeam uplink receiving system concepts for future extremely high frequency communications satellites. Primary alternatives in the design of the uplink antenna, the multibeam adaptive nulling receiver, and the processing algorithm and optimization criterion are discussed. The alternatives are phased array, lens or reflector antennas, nulling at radio frequency or an intermediate frequency, wideband versus narrowband nulling, and various adaptive nulling algorithms. A primary determinant of the hardware complexity is the receiving system architecture, which is described for the alternative antenna and nulling concepts. The final concept chosen will be influenced by the nulling performance requirements, cost, and technological readiness.
(2 + 1)-dimensional dynamical black holes in Einstein-nonlinear Maxwell theory
NASA Astrophysics Data System (ADS)
Gurtug, O.; Mazharimousavi, S. Habib; Halilsoy, M.
2018-02-01
Radiative extensions of BTZ metric in 2 + 1 dimensions are found which are sourced by nonlinear Maxwell fields and a null current. This may be considered as generalization of the problem formulated long go by Vaidya and Bonnor. The mass and charge are functions of retarded/advanced null coordinate apt for decay/inflation. The new solutions are constructed through a Theorem that works remarkably well for any nonlinear electrodynamic model. Hawking temperature is analyzed for the case of the Born-Infeld electrodynamics.
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
NASA Technical Reports Server (NTRS)
Beischer, D. E.
1971-01-01
Techniques for producing very low and zero magnetic fields are considered, giving attention to the compensation of the geomagnetic field by a Helmholtz coil system, approaches utilizing the shielding power of highly permeable alloys, and the complete exclusion of the geomagnetic field with the aid of a superconductive shield. Animal experiments in low magnetic fields are discussed, together with the exposure of man to 'null' magnetic fields and the Josephson junction as a possible biosensor of magnetic fields. It is found that neither the functions nor the behavior of man changes significantly during a two-week exposure to magnetic fields below 50 gammas.
Imaging issues for interferometry with CGH null correctors
NASA Astrophysics Data System (ADS)
Burge, James H.; Zhao, Chunyu; Zhou, Ping
2010-07-01
Aspheric surfaces, such as telescope mirrors, are commonly measured using interferometry with computer generated hologram (CGH) null correctors. The interferometers can be made with high precision and low noise, and CGHs can control wavefront errors to accuracy approaching 1 nm for difficult aspheric surfaces. However, such optical systems are typically poorly suited for high performance imaging. The aspheric surface must be viewed through a CGH that was intentionally designed to introduce many hundreds of waves of aberration. The imaging aberrations create difficulties for the measurements by coupling both geometric and diffraction effects into the measurement. These issues are explored here, and we show how the use of larger holograms can mitigate these effects.
A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes
Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung
2016-01-01
Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest such as survival time are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in an intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method identifies genes simultaneously that are DEGs and PAGs. This method uses standard regression models but adopts different null hypothesis from ordinary regression models, which allows us to perform joint identification in one-step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. Model-based approaches provided a larger number of genes, which are both DEGs and PAGs, than other methods. Simulation studies showed that they have more power than other methods. Through analysis of data from experimental microarrays and simulation studies, the proposed model-based approach was shown to provide a more powerful result than the naïve approach and the hierarchical approach. Since our approach is model-based, it is very flexible and can easily handle different types of covariates. PMID:26964035
Wu, Fenfen; Mi, Wentao; Fu, Yu; Struyk, Arie
2016-01-01
Over 60 mutations of SCN4A encoding the NaV1.4 sodium channel of skeletal muscle have been identified in patients with myotonia, periodic paralysis, myasthenia, or congenital myopathy. Most mutations are missense with gain-of-function defects that cause susceptibility to myotonia or periodic paralysis. Loss-of-function from enhanced inactivation or null alleles is rare and has been associated with myasthenia and congenital myopathy, while a mix of loss and gain of function changes has an uncertain relation to hypokalaemic periodic paralysis. To better define the functional consequences for a loss-of-function, we generated NaV1.4 null mice by deletion of exon 12. Heterozygous null mice have latent myasthenia and a right shift of the force-stimulus relation, without evidence of periodic paralysis. Sodium current density was half that of wild-type muscle and no compensation by retained expression of the foetal NaV1.5 isoform was detected. Mice null for NaV1.4 did not survive beyond the second postnatal day. This mouse model shows remarkable preservation of muscle function and viability for haploinsufficiency of NaV1.4, as has been reported in humans, with a propensity for pseudo-myasthenia caused by a marginal Na+ current density to support sustained high-frequency action potentials in muscle. PMID:27048647
Effects of a block in cysteine catabolism on energy balance and fat metabolism in mice
Niewiadomski, Julie; Zhou, James Q.; Roman, Heather B.; Liu, Xiaojing; Hirschberger, Lawrence L.; Locasale, Jason W.; Stipanuk, Martha H.
2016-01-01
To gain further insights into the effect of elevated cysteine levels on energy metabolism and the possible mechanisms by which cysteine may have these effects, we conducted studies in cysteine dioxygenase (Cdo1)–null mice. Cysteine dioxygenase (CDO) catalyzes the first step of the major pathway for cysteine catabolism. When CDO is absent, tissue and plasma cysteine levels are elevated, resulting in enhanced flux of cysteine through desulfhydration reactions. When Cdo1-null mice were fed a high-fat diet, they gained more weight than their wild-type controls, regardless of whether the diet was supplemented with taurine. Cdo1-null mice had markedly lower leptin levels, higher feed intakes, and markedly higher abundance of hepatic stearoyl-CoA desaturase 1 (SCD1) compared to wild-type control mice, and these differences were not affected by the fat or taurine content of the diet. Thus, reported associations of elevated cysteine levels with greater weight gain and with elevated hepatic Scd1 expression holds in the Cdo1-null mouse model. Hepatic accumulation of acylcarnitines suggested impaired mitochondrial β-oxidation of fatty acids in Cdo1-null mice. The strong associations of elevated cysteine levels with excess H2S production and impairments in energy metabolism suggest that H2S signaling could be involved. PMID:26995761
Effects of a block in cysteine catabolism on energy balance and fat metabolism in mice.
Niewiadomski, Julie; Zhou, James Q; Roman, Heather B; Liu, Xiaojing; Hirschberger, Lawrence L; Locasale, Jason W; Stipanuk, Martha H
2016-01-01
To gain further insights into the effects of elevated cysteine levels on energy metabolism and the possible mechanisms underlying these effects, we conducted studies in cysteine dioxygenase (Cdo1)-null mice. Cysteine dioxygenase (CDO) catalyzes the first step of the major pathway for cysteine catabolism. When CDO is absent, tissue and plasma cysteine levels are elevated, resulting in enhanced flux of cysteine through desulfhydration reactions. When Cdo1-null mice were fed a high-fat diet, they gained more weight than their wild-type controls, regardless of whether the diet was supplemented with taurine. Cdo1-null mice had markedly lower leptin levels, higher feed intakes, and markedly higher abundance of hepatic stearoyl-CoA desaturase 1 (SCD1) compared to wild-type control mice, and these differences were not affected by the fat or taurine content of the diet. Thus, reported associations of elevated cysteine levels with greater weight gain and with elevated hepatic Scd1 expression are also seen in the Cdo1-null mouse model. Hepatic accumulation of acylcarnitines suggests impaired mitochondrial β-oxidation of fatty acids in Cdo1-null mice. The strong associations of elevated cysteine levels with excess H2 S production and impairments in energy metabolism suggest that H2 S signaling could be involved. © 2016 New York Academy of Sciences.
Royston, Patrick; Parmar, Mahesh K B
2014-08-07
Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. For the analysis of trial data, we propose the use of a pre-specified flexible parametric model that can represent a time-dependent hazard ratio if one is present.
Liu, Yan; Mo, Lan; Goldfarb, David S.; Evan, Andrew P.; Liang, Fengxia; Khan, Saeed R.; Lieske, John C.
2010-01-01
Mammalian urine contains a range of macromolecule proteins that play critical roles in renal stone formation, among which Tamm-Horsfall protein (THP) is by far the most abundant. While THP is a potent inhibitor of crystal aggregation in vitro and its ablation in vivo predisposes one of the two existing mouse models to spontaneous intrarenal calcium crystallization, key controversies remain regarding the role of THP in nephrolithiasis. By carrying out a long-range follow-up of more than 250 THP-null mice and their wild-type controls, we demonstrate here that renal calcification is a highly consistent phenotype of the THP-null mice that is age and partially gene dosage dependent, but is gender and genetic background independent. Renal calcification in THP-null mice is progressive, and by 15 mo over 85% of all the THP-null mice develop spontaneous intrarenal crystals. The crystals consist primarily of calcium phosphate in the form of hydroxyapatite, are located more frequently in the interstitial space of the renal papillae than intratubularly, particularly in older animals, and lack accompanying inflammatory cell infiltration. The interstitial deposits of hydroxyapatite observed in THP-null mice bear strong resemblances to the renal crystals found in human kidneys bearing idiopathic calcium oxalate stones. Compared with 24-h urine from the wild-type mice, that of THP-null mice is supersaturated with brushite (calcium phosphate), a stone precursor, and has reduced urinary excretion of citrate, a stone inhibitor. While less frequent than renal calcinosis, renal pelvic and ureteral stones and hydronephrosis occur in the aged THP-null mice. These results provide direct in vivo evidence indicating that normal THP plays an important role in defending the urinary system against calcification and suggest that reduced expression and/or decreased function of THP could contribute to nephrolithiasis. PMID:20591941
NASA Astrophysics Data System (ADS)
Wang, Hui; Wang, Mengyu; Baniasadi, Neda; Jin, Qingying; Elze, Tobias
2017-02-01
Purpose: To assess whether modeling of central vision loss (CVL) due to glaucoma by optical coherence tomography (OCT) retinal nerve fiber (RNF) layer thickness (RNFLT) can be improved by including the location of the major inferior temporal retinal artery (ITA), a known correlate of individual RNF geometry. Methods: Pat- tern deviations of the two locations of the Humphrey 24-2 visual field (VF) known to be specifically vulnerable to glaucomatous CVL and OCT RNFLT on the corresponding circumpapillary sector around the optic nerve head within the radius of 1.73mm were retrospectively selected from 428 eyes of 428 patients of a large clinical glaucoma service. ITA was marked on the 1.73mm circle by a trained observer. Linear regression models were fitted with CVL as dependent variable and VF mean deviation (MD) plus either of (1) RNFLT, (2) ITA, and (3) their combination, respectively, as regressors. To assess CVL over all levels of glaucoma severity, the three models were compared to a null model containing only MD. A Baysian model comparison was performed with the Bayes Factor (BF) as measure of strength of evidence (BF<3: no evidence, 3-20: positive evidence, >20: strong evidence over null model). Results: Neither RNFLT (BF=0.9) nor ITA (BF=1.4) alone provided positive evidence over the null model, but their combination resulted in a model with strong evidence (BF=21.4). Conclusion: While the established circumpapillary RNFLT sector, based on population statistics, could not satisfactorily model CVL, the inclusion of a retinal parameter related to individual eye anatomy yielded a strong structure-function model.
A NULL MODEL FOR THE EXPECTED MACROINVERTEBRATE ASSEMBLAGE IN STREAMS
Predictive models such as River InVertebrate Prediction And Classification System (RIVPACS) and AUStralian RIVer Assessment System (AUSRIVAS) model the natural variation across geographic regions in the occurrences of macroinvertebrate taxa in data from streams that are in refere...
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides large scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660
Tchetgen Tchetgen, Eric
2011-03-01
This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
Light dark matter in the light of CRESST-II
Kopp, Joachim; Schwetz, Thomas; Zupan, Jure
2012-03-01
Recently the CRESST collaboration has published the long anticipated results of their direct Dark Matter (DM) detection experiment with a CaWO 4 target. The number of observed events exceeds known backgrounds at more than 4Σ significance, and this excess could potentially be due to DM scattering. We confront this interpretation with null results from other direct detection experiments for a number of theoretical models, and find that consistency is achieved in non-minimal models such as inelastic DM and isospin-violating DM. In both cases mild tension with constraints remain. The CRESST data can, however, not be reconciled with the null resultsmore » and with the positive signals from DAMA and CoGeNT simultaneously in any of the models we study.« less
Li, Libo; Bentler, Peter M
2011-06-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of approximation (RMSEA) pairs. In this article, we develop a new method that quantifies those chosen RMSEA pairs and allows a quantitative comparison of them. Our method proposes the use of single RMSEA values to replace the choice of RMSEA pairs for model comparison and power analysis, thus avoiding the differential meaning of the chosen RMSEA pairs inherent in the approach of MacCallum et al. (2006). With this choice, the conventional cutoff values in model overall evaluation can directly be transferred and applied to the evaluation and power analysis of model differences. © 2011 American Psychological Association
A method distinguishing expressed vs. null mutations of the Col1A1 gene in osteogenesis imperfecta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redford-Badwal, D.A.; Stover, M.L.; McKinstry, M.
Osteogenesis imperfecta (OI) is a heterogeneous group of heritable disorders of bone characterized by increased susceptibility to fracture. Most of the causative mutations were identified in patients with the lethal form of the disease. Attention is now shifting to the milder forms of OI where glycine substitutions and null producing mutations have been found. Single amino acid substitutions can be identified by RT/PCR of total cellular RNA, but this approach does not work well for null mutations since the defective transcript does not accumulate in the cytoplasm. We have altered our RNA extraction method to separate RNA from the nuclearmore » and cytoplasmic compartments of cultured fibroblasts. Standard methods of mutation identification (RT/PCR followed by SSCP) is applied to each RNA fraction. DNA from an abnormal band on the SSCP gel is eluted and amplified by PCR for cloning and sequencing. Using this approach we have identified an Asp to Asn change in exon 50 (type II OI) and a Gly to Arg in exon 11 (type I OI) of the COL1A1 gene. These changes were found in both nuclear and cytoplasmic compartments. These putative mutations are currently being confirmed by protein studies. In contrast, three patients with mild OI associated with reduced {proportional_to}(I)mRNA, had distinguishing SSCP bands present in the nuclear but not the cytoplasmic compartment. In one case a frame shift mutation was observed, while the other two revealed polymorphisms. The compartmentalization of the mutant allele has directed us to look elsewhere in the transcript for the causative mutation. This approach to mutation identification is capable of distinguishing these fundamentally different types of mutations and allows for preferential cloning and sequencing of the abnormal allele.« less
A nonparametric significance test for sampled networks.
Elliott, Andrew; Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix
2018-01-01
Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein-protein interaction network. The software used in this paper are available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
Detecting Multifractal Properties in Asset Returns:
NASA Astrophysics Data System (ADS)
Lux, Thomas
It has become popular recently to apply the multifractal formalism of statistical physics (scaling analysis of structure functions and f(α) singularity spectrum analysis) to financial data. The outcome of such studies is a nonlinear shape of the structure function and a nontrivial behavior of the spectrum. Eventually, this literature has moved from basic data analysis to estimation of particular variants of multifractal models for asset returns via fitting of the empirical τ(q) and f(α) functions. Here, we reinvestigate earlier claims of multifractality using four long time series of important financial markets. Taking the recently proposed multifractal models of asset returns as our starting point, we show that the typical "scaling estimators" used in the physics literature are unable to distinguish between spurious and "true" multiscaling of financial data. Designing explicit tests for multiscaling, we can in no case reject the null hypothesis that the apparent curvature of both the scaling function and the Hölder spectrum are spuriously generated by the particular fat-tailed distribution of financial data. Given the well-known overwhelming evidence in favor of different degrees of long-term dependence in the powers of returns, we interpret this inability to reject the null hypothesis of multiscaling as a lack of discriminatory power of the standard approach rather than as a true rejection of multiscaling. However, the complete "failure" of the multifractal apparatus in this setting also raises the question whether results in other areas (like geophysics) suffer from similar shortcomings of the traditional methodology.
Isabwe, Alain; Yang, Jun R; Wang, Yongming; Liu, Lemian; Chen, Huihuang; Yang, Jun
2018-07-15
Although the influence of microbial community assembly processes on aquatic ecosystem function and biodiversity is well known, the processes that govern planktonic communities in human-impacted rivers remain largely unstudied. Here, we used multivariate statistics and a null model approach to test the hypothesis that environmental conditions and obstructed dispersal opportunities, dictate a deterministic community assembly for phytoplankton and bacterioplankton across contrasting hydrographic conditions in a subtropical mid-sized river (Jiulong River, southeast China). Variation partitioning analysis showed that the explanatory power of local environmental variables was larger than that of the spatial variables for both plankton communities during the dry season. During the wet season, phytoplankton community variation was mainly explained by local environmental variables, whereas the variance in bacterioplankton was explained by both environmental and spatial predictors. The null model based on Raup-Crick coefficients for both planktonic groups suggested little evidences of the stochastic processes involving dispersal and random distribution. Our results showed that hydrological change and landscape structure act together to cause divergence in communities along the river channel, thereby dictating a deterministic assembly and that selection exceeds dispersal limitation during the dry season. Therefore, to protect the ecological integrity of human-impacted rivers, watershed managers should not only consider local environmental conditions but also dispersal routes to account for the effect of regional species pool on local communities. Copyright © 2018 Elsevier B.V. All rights reserved.
A nonparametric significance test for sampled networks
Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix
2018-01-01
Abstract Motivation Our work is motivated by an interest in constructing a protein–protein interaction network that captures key features associated with Parkinson’s disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. Results We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein–protein interaction network. Availability and implementation The software used in this paper are available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. Contact ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:29036452
Energy emission from a high curvature region and its backreaction
NASA Astrophysics Data System (ADS)
Kokubu, Takafumi; Jhingan, Sanjay; Harada, Tomohiro
2018-05-01
A strong gravity naked singular region can give important clues toward understanding the classical as well as spontaneous nature of General Relativity. We propose here a model for energy emission from a naked singular region in a self-similar dust spacetime by gluing two self-similar dust solutions at the Cauchy horizon. The energy is defined and evaluated as a surface energy of a null hypersurface, the null shell. Also included are scenarios of the spontaneous creation or disappearance of a singularity, the end of inflation, black hole formation, and bubble nucleation. Our examples investigated here explicitly show that one can model unlimitedly luminous and energetic objects in the framework of General Relativity.
Ding, Shengli; Blue, Randal E.; Morgan, Douglas R.; Lund, Pauline K.
2015-01-01
Background Activatable near-infrared fluorescent (NIRF) probes have been used for ex vivo and in vivo detection of intestinal tumors in animal models. We hypothesized that NIRF probes activatable by cathepsins or MMPs will detect and quantify dextran sulphate sodium (DSS) induced acute colonic inflammation in wild type (WT) mice or chronic colitis in IL-10 null mice ex vivo or in vivo. Methods WT mice given DSS, water controls and IL-10 null mice with chronic colitis were administered probes by retro-orbital injection. FMT2500 LX system imaged fresh and fixed intestine ex vivo and mice in vivo. Inflammation detected by probes was verified by histology and colitis scoring. NIRF signal intensity was quantified using 2D region of interest (ROI) ex vivo or 3D ROI-analysis in vivo. Results Ex vivo, seven probes tested yielded significant higher NIRF signals in colon of DSS treated mice versus controls. A subset of probes was tested in IL-10 null mice and yielded strong ex vivo signals. Ex vivo fluorescence signal with 680 series probes was preserved after formalin fixation. In DSS and IL-10 null models, ex vivo NIRF signal strongly and significantly correlated with colitis scores. In vivo, ProSense680, CatK680FAST and MMPsense680 yielded significantly higher NIRF signals in DSS treated mice than controls but background was high in controls. Conclusion Both cathepsin or MMP-activated NIRF-probes can detect and quantify colonic inflammation ex vivo. ProSense680 yielded the strongest signals in DSS colitis ex vivo and in vivo, but background remains a problem for in vivo quantification of colitis. PMID:24374874
Siller, Saul S.; Broadie, Kendal
2011-01-01
SUMMARY Fragile X syndrome (FXS), caused by loss of the fragile X mental retardation 1 (FMR1) product (FMRP), is the most common cause of inherited intellectual disability and autism spectrum disorders. FXS patients suffer multiple behavioral symptoms, including hyperactivity, disrupted circadian cycles, and learning and memory deficits. Recently, a study in the mouse FXS model showed that the tetracycline derivative minocycline effectively remediates the disease state via a proposed matrix metalloproteinase (MMP) inhibition mechanism. Here, we use the well-characterized Drosophila FXS model to assess the effects of minocycline treatment on multiple neural circuit morphological defects and to investigate the MMP hypothesis. We first treat Drosophila Fmr1 (dfmr1) null animals with minocycline to assay the effects on mutant synaptic architecture in three disparate locations: the neuromuscular junction (NMJ), clock neurons in the circadian activity circuit and Kenyon cells in the mushroom body learning and memory center. We find that minocycline effectively restores normal synaptic structure in all three circuits, promising therapeutic potential for FXS treatment. We next tested the MMP hypothesis by assaying the effects of overexpressing the sole Drosophila tissue inhibitor of MMP (TIMP) in dfmr1 null mutants. We find that TIMP overexpression effectively prevents defects in the NMJ synaptic architecture in dfmr1 mutants. Moreover, co-removal of dfmr1 similarly rescues TIMP overexpression phenotypes, including cellular tracheal defects and lethality. To further test the MMP hypothesis, we generated dfmr1;mmp1 double null mutants. Null mmp1 mutants are 100% lethal and display cellular tracheal defects, but co-removal of dfmr1 allows adult viability and prevents tracheal defects. Conversely, co-removal of mmp1 ameliorates the NMJ synaptic architecture defects in dfmr1 null mutants, despite the lack of detectable difference in MMP1 expression or gelatinase activity between the single dfmr1 mutants and controls. These results support minocycline as a promising potential FXS treatment and suggest that it might act via MMP inhibition. We conclude that FMRP and TIMP pathways interact in a reciprocal, bidirectional manner. PMID:21669931
Discoveries far from the lamppost with matrix elements and ranking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debnath, Dipsikha; Gainer, James S.; Matchev, Konstantin T.
2015-04-01
The prevalence of null results in searches for new physics at the LHC motivates the effort to make these searches as model-independent as possible. We describe procedures for adapting the Matrix Element Method for situations where the signal hypothesis is not known a priori. We also present general and intuitive approaches for performing analyses and presenting results, which involve the flattening of background distributions using likelihood information. The first flattening method involves ranking events by background matrix element, the second involves quantile binning with respect to likelihood (and other) variables, and the third method involves reweighting histograms by the inversemore » of the background distribution.« less
[Programmed mouse genome modifications].
Babinet, C
1998-02-01
The availability, in the mouse, of embryonic stem cells (ES cells) which have the ability to colonize the germ line of a developing embryo, has opened entirely new avenues to the genetic approach of embryonic development, physiology and pathology of this animal. Indeed, it is now possible, using homologous recombination in ES cells, to introduce mutations in any gene as long as it has been cloned. Thus, null as well as more subtle mutations can be created. Furthermore, scenarios are currently being derived which will allow one to generate conditional mutations. Taken together, these methods offer a tremendous tool to study gene function in vivo; they also open the way to creating murine models of human genetic diseases.
Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E
2015-04-07
Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
Yeast Genes Controlling Responses to Topogenic Signals in a Model Transmembrane Protein
Tipper, Donald J.; Harley, Carol A
2002-01-01
Yeast protein insertion orientation (PIO) mutants were isolated by selecting for growth on sucrose in cells in which the only source of invertase is a C-terminal fusion to a transmembrane protein. Only the fraction with an exocellular C terminus can be processed to secreted invertase and this fraction is constrained to 2–3% by a strong charge difference signal. Identified pio mutants increased this to 9–12%. PIO1 is SPF1, encoding a P-type ATPase located in the endoplasmic reticulum (ER) or Golgi. spf1-null mutants are modestly sensitive to EGTA. Sensitivity is considerably greater in an spf1 pmr1 double mutant, although PIO is not further disturbed. Pmr1p is the Golgi Ca2+ ATPase and Spf1p may be the equivalent ER pump. PIO2 is STE24, a metalloprotease anchored in the ER membrane. Like Spf1p, Ste24p is expressed in all yeast cell types and belongs to a highly conserved protein family. The effects of ste24- and spf1-null mutations on invertase secretion are additive, cell generation time is increased 60%, and cells become sensitive to cold and to heat shock. Ste24p and Rce1p cleave the C-AAX bond of farnesylated CAAX box proteins. The closest paralog of SPF1 is YOR291w. Neither rce1-null nor yor291w-null mutations affected PIO or the phenotype of spf1- or ste24-null mutants. Mutations in PIO3 (unidentified) cause a weaker Pio phenotype, enhanced by a null mutation in BMH1, one of two yeast 14-3-3 proteins. PMID:11950929
maLPA1-null mice as an endophenotype of anxious depression
Moreno-Fernández, R D; Pérez-Martín, M; Castilla-Ortega, E; Rosell del Valle, C; García-Fernández, M I; Chun, J; Estivill-Torrús, G; Rodríguez de Fonseca, F; Santín, L J; Pedraza, C
2017-01-01
Anxious depression is a prevalent disease with devastating consequences and a poor prognosis. Nevertheless, the neurobiological mechanisms underlying this mood disorder remain poorly characterized. The LPA1 receptor is one of the six characterized G protein-coupled receptors (LPA1–6) through which lysophosphatidic acid acts as an intracellular signalling molecule. The loss of this receptor induces anxiety and several behavioural and neurobiological changes that have been strongly associated with depression. In this study, we sought to investigate the involvement of the LPA1 receptor in mood. We first examined hedonic and despair-like behaviours in wild-type and maLPA1 receptor null mice. Owing to the behavioural response exhibited by the maLPA1-null mice, the panic-like reaction was assessed. In addition, c-Fos expression was evaluated as a measure of the functional activity, followed by interregional correlation matrices to establish the brain map of functional activation. maLPA1-null mice exhibited anhedonia, agitation and increased stress reactivity, behaviours that are strongly associated with the psychopathological endophenotype of depression with anxiety features. Furthermore, the functional brain maps differed between the genotypes. The maLPA1-null mice showed increased limbic-system activation, similar to that observed in depressive patients. Antidepressant treatment induced behavioural improvements and functional brain normalisation. Finally, based on validity criteria, maLPA1-null mice are proposed as an animal model of anxious depression. Here, for we believe the first time, we have identified a possible relationship between the LPA1 receptor and anxious depression, shedding light on the unknown neurobiological basis of this subtype of depression and providing an opportunity to explore new therapeutic targets for the treatment of mood disorders, especially for the anxious subtype of depression. PMID:28375206
Chakraborty, Anirban; Wakamiya, Maki; Venkova-Canova, Tatiana; Pandita, Raj K.; Aguilera-Aguirre, Leopoldo; Sarker, Altaf H.; Singh, Dharmendra Kumar; Hosoki, Koa; Wood, Thomas G.; Sharma, Gulshan; Cardenas, Victor; Sarkar, Partha S.; Sur, Sanjiv; Pandita, Tej K.; Boldogh, Istvan; Hazra, Tapas K.
2015-01-01
Why mammalian cells possess multiple DNA glycosylases (DGs) with overlapping substrate ranges for repairing oxidatively damaged bases via the base excision repair (BER) pathway is a long-standing question. To determine the biological role of these DGs, null animal models have been generated. Here, we report the generation and characterization of mice lacking Neil2 (Nei-like 2). As in mice deficient in each of the other four oxidized base-specific DGs (OGG1, NTH1, NEIL1, and NEIL3), Neil2-null mice show no overt phenotype. However, middle-aged to old Neil2-null mice show the accumulation of oxidative genomic damage, mostly in the transcribed regions. Immuno-pulldown analysis from wild-type (WT) mouse tissue showed the association of NEIL2 with RNA polymerase II, along with Cockayne syndrome group B protein, TFIIH, and other BER proteins. Chromatin immunoprecipitation analysis from mouse tissue showed co-occupancy of NEIL2 and RNA polymerase II only on the transcribed genes, consistent with our earlier in vitro findings on NEIL2's role in transcription-coupled BER. This study provides the first in vivo evidence of genomic region-specific repair in mammals. Furthermore, telomere loss and genomic instability were observed at a higher frequency in embryonic fibroblasts from Neil2-null mice than from the WT. Moreover, Neil2-null mice are much more responsive to inflammatory agents than WT mice. Taken together, our results underscore the importance of NEIL2 in protecting mammals from the development of various pathologies that are linked to genomic instability and/or inflammation. NEIL2 is thus likely to play an important role in long term genomic maintenance, particularly in long-lived mammals such as humans. PMID:26245904
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang Cheng; Behr, Melissa; Xie Fang
2008-02-15
Chloroform causes hepatic and renal toxicity in a number of species. In vitro studies have indicated that chloroform can be metabolized by P450 enzymes in the kidney to nephrotoxic intermediate, although direct in vivo evidence for the role of renal P450 in the nephrotoxicity has not been reported. This study was to determine whether chloroform renal toxicity persists in a mouse model with a liver-specific deletion of the P450 reductase (Cpr) gene (liver-Cpr-null). Chloroform-induced renal toxicity and chloroform tissue levels were compared between the liver-Cpr-null and wild-type mice at 24 h following differing doses of chloroform. At a chloroform dosemore » of 150 mg/kg, the levels of blood urea nitrogen (BUN) were five times higher in the exposed group than in the vehicle-treated one for the liver-Cpr-null mice, but they were only slightly higher in the exposed group than in the vehicle-treated group for the wild-type mice. Severe lesions were found in the kidney of the liver-Cpr-null mice, while only mild lesions were found in the wild-type mice. At a chloroform dose of 300 mg/kg, severe kidney lesions were observed in both strains, yet the BUN levels were still higher in the liver-Cpr-null than in the wild-type mice. Higher chloroform levels were found in the tissues of the liver-Cpr-null mice. These findings indicated that loss of hepatic P450-dependent chloroform metabolism does not protect against chloroform-induced renal toxicity, suggesting that renal P450 enzymes play an essential role in chloroform renal toxicity.« less
maLPA1-null mice as an endophenotype of anxious depression.
Moreno-Fernández, R D; Pérez-Martín, M; Castilla-Ortega, E; Rosell Del Valle, C; García-Fernández, M I; Chun, J; Estivill-Torrús, G; Rodríguez de Fonseca, F; Santín, L J; Pedraza, C
2017-04-04
Anxious depression is a prevalent disease with devastating consequences and a poor prognosis. Nevertheless, the neurobiological mechanisms underlying this mood disorder remain poorly characterized. The LPA1 receptor is one of the six characterized G protein-coupled receptors (LPA1-6) through which lysophosphatidic acid acts as an intracellular signalling molecule. The loss of this receptor induces anxiety and several behavioural and neurobiological changes that have been strongly associated with depression. In this study, we sought to investigate the involvement of the LPA1 receptor in mood. We first examined hedonic and despair-like behaviours in wild-type and maLPA1 receptor null mice. Owing to the behavioural response exhibited by the maLPA1-null mice, the panic-like reaction was assessed. In addition, c-Fos expression was evaluated as a measure of the functional activity, followed by interregional correlation matrices to establish the brain map of functional activation. maLPA1-null mice exhibited anhedonia, agitation and increased stress reactivity, behaviours that are strongly associated with the psychopathological endophenotype of depression with anxiety features. Furthermore, the functional brain maps differed between the genotypes. The maLPA1-null mice showed increased limbic-system activation, similar to that observed in depressive patients. Antidepressant treatment induced behavioural improvements and functional brain normalisation. Finally, based on validity criteria, maLPA1-null mice are proposed as an animal model of anxious depression. Here, for we believe the first time, we have identified a possible relationship between the LPA1 receptor and anxious depression, shedding light on the unknown neurobiological basis of this subtype of depression and providing an opportunity to explore new therapeutic targets for the treatment of mood disorders, especially for the anxious subtype of depression.
Repicky, Sarah; Broadie, Kendal
2009-02-01
Loss of the mRNA-binding protein FMRP results in the most common inherited form of both mental retardation and autism spectrum disorders: fragile X syndrome (FXS). The leading FXS hypothesis proposes that metabotropic glutamate receptor (mGluR) signaling at the synapse controls FMRP function in the regulation of local protein translation to modulate synaptic transmission strength. In this study, we use the Drosophila FXS disease model to test the relationship between Drosophila FMRP (dFMRP) and the sole Drosophila mGluR (dmGluRA) in regulation of synaptic function, using two-electrode voltage-clamp recording at the glutamatergic neuromuscular junction (NMJ). Null dmGluRA mutants show minimal changes in basal synapse properties but pronounced defects during sustained high-frequency stimulation (HFS). The double null dfmr1;dmGluRA mutant shows repression of enhanced augmentation and delayed onset of premature long-term facilitation (LTF) and strongly reduces grossly elevated post-tetanic potentiation (PTP) phenotypes present in dmGluRA-null animals. Null dfmr1 mutants show features of synaptic hyperexcitability, including multiple transmission events in response to a single stimulus and cyclic modulation of transmission amplitude during prolonged HFS. The double null dfmr1;dmGluRA mutant shows amelioration of these defects but does not fully restore wildtype properties in dfmr1-null animals. These data suggest that dmGluRA functions in a negative feedback loop in which excess glutamate released during high-frequency transmission binds the glutamate receptor to dampen synaptic excitability, and dFMRP functions to suppress the translation of proteins regulating this synaptic excitability. Removal of the translational regulator partially compensates for loss of the receptor and, similarly, loss of the receptor weakly compensates for loss of the translational regulator.
Arnold, Shanna A.; Rivera, Lee B.; Carbon, Juliet G.; Toombs, Jason E.; Chang, Chi-Lun; Bradshaw, Amy D.; Brekken, Rolf A.
2012-01-01
Pancreatic adenocarcinoma, a desmoplastic disease, is the fourth leading cause of cancer-related death in the Western world due, in large part, to locally invasive primary tumor growth and ensuing metastasis. SPARC is a matricellular protein that governs extracellular matrix (ECM) deposition and maturation during tissue remodeling, particularly, during wound healing and tumorigenesis. In the present study, we sought to determine the mechanism by which lack of host SPARC alters the tumor microenvironment and enhances invasion and metastasis of an orthotopic model of pancreatic cancer. We identified that levels of active TGFβ1 were increased significantly in tumors grown in SPARC-null mice. TGFβ1 contributes to many aspects of tumor development including metastasis, endothelial cell permeability, inflammation and fibrosis, all of which are altered in the absence of stromal-derived SPARC. Given these results, we performed a survival study to assess the contribution of increased TGFβ1 activity to tumor progression in SPARC-null mice using losartan, an angiotensin II type 1 receptor antagonist that diminishes TGFβ1 expression and activation in vivo. Tumors grown in SPARC-null mice progressed more quickly than those grown in wild-type littermates leading to a significant reduction in median survival. However, median survival of SPARC-null animals treated with losartan was extended to that of losartan-treated wild-type controls. In addition, losartan abrogated TGFβ induced gene expression, reduced local invasion and metastasis, decreased vascular permeability and altered the immune profile of tumors grown in SPARC-null mice. These data support the concept that aberrant TGFβ1-activation in the absence of host SPARC contributes significantly to tumor progression and suggests that SPARC, by controlling ECM deposition and maturation, can regulate TGFβ availability and activation. PMID:22348081
Human dental pulp pluripotent-like stem cells promote wound healing and muscle regeneration.
Martínez-Sarrà, Ester; Montori, Sheyla; Gil-Recio, Carlos; Núñez-Toldrà, Raquel; Costamagna, Domiziana; Rotini, Alessio; Atari, Maher; Luttun, Aernout; Sampaolesi, Maurilio
2017-07-27
Dental pulp represents an easily accessible autologous source of adult stem cells. A subset of these cells, named dental pulp pluripotent-like stem cells (DPPSC), shows high plasticity and can undergo multiple population doublings, making DPPSC an appealing tool for tissue repair or maintenance. DPPSC were harvested from the dental pulp of third molars extracted from young patients. Growth factors released by DPPSC were analysed using antibody arrays. Cells were cultured in specific differentiation media and their endothelial, smooth and skeletal muscle differentiation potential was evaluated. The therapeutic potential of DPPSC was tested in a wound healing mouse model and in two genetic mouse models of muscular dystrophy (Scid/mdx and Sgcb-null Rag2-null γc-null). DPPSC secreted several growth factors involved in angiogenesis and extracellular matrix deposition and improved vascularisation in all three murine models. Moreover, DPPSC stimulated re-epithelialisation and ameliorated collagen deposition and organisation in healing wounds. In dystrophic mice, DPPSC engrafted in the skeletal muscle of both dystrophic murine models and showed integration in muscular fibres and vessels. In addition, DPPSC treatment resulted in reduced fibrosis and collagen content, larger cross-sectional area of type II fast-glycolytic fibres and infiltration of higher numbers of proangiogenic CD206 + macrophages. Overall, DPPSC represent a potential source of stem cells to enhance the wound healing process and slow down dystrophic muscle degeneration.
Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling
Sinnott, Jennifer A.; Cai, Tianxi
2013-01-01
Summary Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713
Robustness of survival estimates for radio-marked animals
Bunck, C.M.; Chen, C.-L.
1992-01-01
Telemetry techniques are often used to study the survival of birds and mammals; particularly whcn mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have becn developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and an estimator based also on the assumption of constant hazard and to eva!uate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-Iocation probability of less than one are described and evaluated. Generallv the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests were similar.
Rank-based permutation approaches for non-parametric factorial designs.
Umlauft, Maria; Konietschke, Frank; Pauly, Markus
2017-11-01
Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
Chandrasekharan, Unni M.; Siemionow, Maria; Unsal, Murat; Yang, Lin; Poptic, Earl; Bohn, Justin; Ozer, Kagan; Zhou, Zhongmin; Howe, Philip H.; Penn, Marc
2007-01-01
Tumor necrosis factor-α (TNF-α) binds to 2 distinct cell-surface receptors: TNF-α receptor-I (TNFR-I: p55) and TNF-α receptor-II (TNFR-II: p75). TNF-α induces leukocyte adhesion molecules on endothelial cells (ECs), which mediate 3 defined steps of the inflammatory response; namely, leukocyte rolling, firm adhesion, and transmigration. In this study, we have investigated the role of p75 in TNF-α–induced leukocyte adhesion molecules using cultured ECs derived from wild-type (WT), p75-null (p75−/−), or p55-null (p55−/−) mice. We observed that p75 was essential for TNF-α–induced E-selectin, vascular cell adhesion molecule 1 (VCAM-1), and intercellular adhesion molecule 1 (ICAM-1) expression. We also investigated the putative role of p75 in inflammation in vivo using an intravital microscopic approach with a mouse cremaster muscle model. TNF-α–stimulated leukocyte rolling, firm adhesion to ECs, and transmigration were dramatically reduced in p75−/− mice. Transplanted WT cremaster in p75−/− mice showed a robust leukocyte rolling and firm adhesion upon TNF-α activation, suggesting that the impairment in EC-leukocyte interaction in p75−/− mice is due to EC dysfunction. These results demonstrate, for the first time, that endothelial p75 is essential for TNF-α–induced leukocyte–endothelial-cell interaction. Our findings may contribute to the identification of novel p75-targeted therapeutic approaches for inflammatory diseases. PMID:17068152
Biological and genetic properties of the p53 null preneoplastic mammary epithelium
NASA Technical Reports Server (NTRS)
Medina, Daniel; Kittrell, Frances S.; Shepard, Anne; Stephens, L. Clifton; Jiang, Cheng; Lu, Junxuan; Allred, D. Craig; McCarthy, Maureen; Ullrich, Robert L.
2002-01-01
The absence of the tumor suppressor gene p53 confers an increased tumorigenic risk for mammary epithelial cells. In this report, we describe the biological and genetic properties of the p53 null preneoplastic mouse mammary epithelium in a p53 wild-type environment. Mammary epithelium from p53 null mice was transplanted serially into the cleared mammary fat pads of p53 wild-type BALB/c female to develop stable outgrowth lines. The outgrowth lines were transplanted for 10 generations. The outgrowths were ductal in morphology and progressed through ductal hyperplasia and ductal carcinoma in situ before invasive cancer. The preneoplastic outgrowth lines were immortal and exhibited activated telomerase activity. They are estrogen and progesterone receptor-positive, and aneuploid, and had various levels of tumorigenic potential. The biological and genetic properties of these lines are distinct from those found in most hyperplastic alveolar outgrowth lines, the form of mammary preneoplasia occurring in most traditional models of murine mammary tumorigenesis. These results indicate that the preneoplastic cell populations found in this genetically engineered model are similar in biological properties to a subset of precurser lesions found in human breast cancer and provide a unique model to identify secondary events critical for tumorigenicity and invasiveness.
Semmens, Brice X; Auster, Peter J; Paddack, Michelle J
2010-01-27
Marine protected area (MPA) networks have been proposed as a principal method for conserving biological diversity, yet patterns of diversity may ultimately complicate or compromise the development of such networks. We show how a series of ecological null models can be applied to assemblage data across sites in order to identify non-random biological patterns likely to influence the effectiveness of MPA network design. We use fish census data from Caribbean fore-reefs as a test system and demonstrate that: 1) site assemblages were nested, such that species found on sites with relatively few species were subsets of those found on sites with relatively many species, 2) species co-occurred across sites more than expected by chance once species-habitat associations were accounted for, and 3) guilds were most evenly represented at the richest sites and richness among all guilds was correlated (i.e., species and trophic diversity were closely linked). These results suggest that the emerging Caribbean marine protected area network will likely be successful at protecting regional diversity even if planning is largely constrained by insular, inventory-based design efforts. By recasting ecological null models as tests of assemblage patterns likely to influence management action, we demonstrate how these classic tools of ecological theory can be brought to bear in applied conservation problems.
On the Interpretation and Use of Mediation: Multiple Perspectives on Mediation Analysis
Agler, Robert; De Boeck, Paul
2017-01-01
Mediation analysis has become a very popular approach in psychology, and it is one that is associated with multiple perspectives that are often at odds, often implicitly. Explicitly discussing these perspectives and their motivations, advantages, and disadvantages can help to provide clarity to conversations and research regarding the use and refinement of mediation models. We discuss five such pairs of perspectives on mediation analysis, their associated advantages and disadvantages, and their implications: with vs. without a mediation hypothesis, specific effects vs. a global model, directness vs. indirectness of causation, effect size vs. null hypothesis testing, and hypothesized vs. alternative explanations. Discussion of the perspectives is facilitated by a small simulation study. Some philosophical and linguistic considerations are briefly discussed, as well as some other perspectives we do not develop here. PMID:29187828
Optical Vector Receiver Operating Near the Quantum Limit
NASA Astrophysics Data System (ADS)
Vilnrotter, V. A.; Lau, C.-W.
2005-05-01
An optical receiver concept for binary signals with performance approaching the quantum limit at low average-signal energies is developed and analyzed. A conditionally nulling receiver that reaches the quantum limit in the absence of background photons has been devised by Dolinar. However, this receiver requires ideal optical combining and complicated real-time shaping of the local field; hence, it tends to be difficult to implement at high data rates. A simpler nulling receiver that approaches the quantum limit without complex optical processing, suitable for high-rate operation, had been suggested earlier by Kennedy. Here we formulate a vector receiver concept that incorporates the Kennedy receiver with a physical beamsplitter, but it also utilizes the reflected signal component to improve signal detection. It is found that augmenting the Kennedy receiver with classical coherent detection at the auxiliary beamsplitter output, and optimally processing the vector observations, always improves on the performance of the Kennedy receiver alone, significantly so at low average-photon rates. This is precisely the region of operation where modern codes approach channel capacity. It is also shown that the addition of background radiation has little effect on the performance of the coherent receiver component, suggesting a viable approach for near-quantum-limited performance in high background environments.
Meterwavelength Single-pulse Polarimetric Emission Survey. III. The Phenomenon of Nulling in Pulsars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, Rahul; Mitra, Dipanjan; Melikidze, George I., E-mail: rahulbasu.astro@gmail.com
A detailed analysis of nulling was conducted for the pulsars studied in the Meterwavelength Single-pulse Polarimetric Emission Survey. We characterized nulling in 36 pulsars including 17 pulsars where the phenomenon was reported for the first time. The most dominant nulls lasted for a short duration, less than five periods. Longer duration nulls extending to hundreds of periods were also seen in some cases. A careful analysis showed the presence of periodicities in the transition from the null to the burst states in 11 pulsars. In our earlier work, fluctuation spectrum analysis showed multiple periodicities in 6 of these 11 pulsars.more » We demonstrate that the longer periodicity in each case was associated with nulling. The shorter periodicities usually originate from subpulse drifting. The nulling periodicities were more aligned with the periodic amplitude modulation, indicating a possible common origin for both. The most prevalent nulls last for a single period and can be potentially explained using random variations affecting the plasma processes in the pulsar magnetosphere. On the other hand, longer-duration nulls require changes in the pair-production processes, which need an external triggering mechanism for the changes. The presence of periodic nulling puts an added constraint on the triggering mechanism, which also needs to be periodic.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Ruilong; Xie, Lun; He, Jiansen
Signatures of secondary islands are frequently observed in the magnetic reconnection regions of magnetotail plasmas. In this paper, magnetic structures with the secondary-island signatures observed by Cluster are reassembled by a fitting-reconstruction method. The results show three-dimensionally that a secondary island event can manifest the flux rope formed with an A{sub s}-type null and a B{sub s}-type null paired via their spines. We call this A{sub s}-spine-B{sub s}-like configuration the helically wrapped spine model. The reconstructed field lines wrap around the spine to form the flux rope, and an O-type topology is therefore seen on the plane perpendicular to themore » spine. Magnetized electrons are found to rotate on and cross the fan surface, suggesting that both the torsional-spine and the spine-fan reconnection take place in the configuration. Furthermore, detailed analysis implies that the spiral nulls and flux ropes were locally generated nearby the spacecraft in the reconnection outflow region, indicating that secondary reconnection may occur in the exhaust away from the primary reconnection site.« less
How to talk about protein‐level false discovery rates in shotgun proteomics
The, Matthew; Tasnim, Ayesha
2016-01-01
A frequently sought output from a shotgun proteomics experiment is a list of proteins that we believe to have been present in the analyzed sample before proteolytic digestion. The standard technique to control for errors in such lists is to enforce a preset threshold for the false discovery rate (FDR). Many consider protein‐level FDRs a difficult and vague concept, as the measurement entities, spectra, are manifestations of peptides and not proteins. Here, we argue that this confusion is unnecessary and provide a framework on how to think about protein‐level FDRs, starting from its basic principle: the null hypothesis. Specifically, we point out that two competing null hypotheses are used concurrently in today's protein inference methods, which has gone unnoticed by many. Using simulations of a shotgun proteomics experiment, we show how confusing one null hypothesis for the other can lead to serious discrepancies in the FDR. Furthermore, we demonstrate how the same simulations can be used to verify FDR estimates of protein inference methods. In particular, we show that, for a simple protein inference method, decoy models can be used to accurately estimate protein‐level FDRs for both competing null hypotheses. PMID:27503675
Internal null controllability of a linear Schrödinger-KdV system on a bounded interval
NASA Astrophysics Data System (ADS)
Araruna, Fágner D.; Cerpa, Eduardo; Mercado, Alberto; Santos, Maurício C.
2016-01-01
The control of a linear dispersive system coupling a Schrödinger and a linear Korteweg-de Vries equation is studied in this paper. The system can be viewed as three coupled real-valued equations by taking real and imaginary parts in the Schrödinger equation. The internal null controllability is proven by using either one complex-valued control on the Schrödinger equation or two real-valued controls, one on each equation. Notice that the single Schrödinger equation is not known to be controllable with a real-valued control. The standard duality method is used to reduce the controllability property to an observability inequality, which is obtained by means of a Carleman estimates approach.
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
MAGNETIC NULL POINTS IN KINETIC SIMULATIONS OF SPACE PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olshevsky, Vyacheslav; Innocenti, Maria Elena; Cazzola, Emanuele
2016-03-01
We present a systematic attempt to study magnetic null points and the associated magnetic energy conversion in kinetic particle-in-cell simulations of various plasma configurations. We address three-dimensional simulations performed with the semi-implicit kinetic electromagnetic code iPic3D in different setups: variations of a Harris current sheet, dipolar and quadrupolar magnetospheres interacting with the solar wind, and a relaxing turbulent configuration with multiple null points. Spiral nulls are more likely created in space plasmas: in all our simulations except lunar magnetic anomaly (LMA) and quadrupolar mini-magnetosphere the number of spiral nulls prevails over the number of radial nulls by a factor of 3–9.more » We show that often magnetic nulls do not indicate the regions of intensive energy dissipation. Energy dissipation events caused by topological bifurcations at radial nulls are rather rare and short-lived. The so-called X-lines formed by the radial nulls in the Harris current sheet and LMA simulations are rather stable and do not exhibit any energy dissipation. Energy dissipation is more powerful in the vicinity of spiral nulls enclosed by magnetic flux ropes with strong currents at their axes (their cross sections resemble 2D magnetic islands). These null lines reminiscent of Z-pinches efficiently dissipate magnetic energy due to secondary instabilities such as the two-stream or kinking instability, accompanied by changes in magnetic topology. Current enhancements accompanied by spiral nulls may signal magnetic energy conversion sites in the observational data.« less
2010 August 1–2 Sympathetic Eruptions. II. Magnetic Topology of the MHD Background Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Titov, Viacheslav S.; Mikić, Zoran; Török, Tibor
Using a potential field source-surface (PFSS) model, we recently analyzed the global topology of the background coronal magnetic field for a sequence of coronal mass ejections (CMEs) that occurred on 2010 August 1–2. Here we repeat this analysis for the background field reproduced by a magnetohydrodynamic (MHD) model that incorporates plasma thermodynamics. As for the PFSS model, we find that all three CME source regions contain a coronal hole (CH) that is separated from neighboring CHs by topologically very similar pseudo-streamer structures. However, the two models yield very different results for the size, shape, and flux of the CHs. Wemore » find that the helmet-streamer cusp line, which corresponds to a source-surface null line in the PFSS model, is structurally unstable and does not form in the MHD model. Our analysis indicates that, generally, in MHD configurations, this line instead consists of a multiple-null separator passing along the edge of disconnected-flux regions. Some of these regions are transient and may be the origin of the so-called streamer blobs. We show that the core topological structure of such blobs is a three-dimensional “plasmoid” consisting of two conjoined flux ropes of opposite handedness, which connect at a spiral null point of the magnetic field. Our analysis reveals that such plasmoids also appear in pseudo-streamers on much smaller scales. These new insights into the coronal magnetic topology provide some intriguing implications for solar energetic particle events and for the properties of the slow solar wind.« less
2010 August 1-2 Sympathetic Eruptions. II. Magnetic Topology of the MHD Background Field
NASA Astrophysics Data System (ADS)
Titov, Viacheslav S.; Mikić, Zoran; Török, Tibor; Linker, Jon A.; Panasenco, Olga
2017-08-01
Using a potential field source-surface (PFSS) model, we recently analyzed the global topology of the background coronal magnetic field for a sequence of coronal mass ejections (CMEs) that occurred on 2010 August 1-2. Here we repeat this analysis for the background field reproduced by a magnetohydrodynamic (MHD) model that incorporates plasma thermodynamics. As for the PFSS model, we find that all three CME source regions contain a coronal hole (CH) that is separated from neighboring CHs by topologically very similar pseudo-streamer structures. However, the two models yield very different results for the size, shape, and flux of the CHs. We find that the helmet-streamer cusp line, which corresponds to a source-surface null line in the PFSS model, is structurally unstable and does not form in the MHD model. Our analysis indicates that, generally, in MHD configurations, this line instead consists of a multiple-null separator passing along the edge of disconnected-flux regions. Some of these regions are transient and may be the origin of the so-called streamer blobs. We show that the core topological structure of such blobs is a three-dimensional “plasmoid” consisting of two conjoined flux ropes of opposite handedness, which connect at a spiral null point of the magnetic field. Our analysis reveals that such plasmoids also appear in pseudo-streamers on much smaller scales. These new insights into the coronal magnetic topology provide some intriguing implications for solar energetic particle events and for the properties of the slow solar wind.
Lee, Won-Jae; Banavara, Dattatreya S.; Hughes, Joanne E.; Christiansen, Jason K.; Steele, James L.; Broadbent, Jeffery R.; Rankin, Scott A.
2007-01-01
Catabolism of sulfur-containing amino acids plays an important role in the development of cheese flavor. During ripening, cystathionine β-lyase (CBL) is believed to contribute to the formation of volatile sulfur compounds (VSCs) such as methanethiol and dimethyl disulfide. However, the role of CBL in the generation of VSCs from the catabolism of specific sulfur-containing amino acids is not well characterized. The objective of this study was to investigate the role of CBL in VSC formation by Lactobacillus helveticus CNRZ 32 using genetic variants of L. helveticus CNRZ 32 including the CBL-null mutant, complementation of the CBL-null mutant, and the CBL overexpression mutant. The formation of VSCs from methionine, cystathionine, and cysteine was determined in a model system using gas chromatography-mass spectrometry with solid-phase microextraction. With methionine as a substrate, CBL overexpression resulted in higher VSC production than that of wild-type L. helveticus CNRZ 32 or the CBL-null mutant. However, there were no differences in VSC production between the wild type and the CBL-null mutant. With cystathionine, methanethiol production was detected from the CBL overexpression variant and complementation of the CBL-null mutant, implying that CBL may be involved in the conversion of cystathionine to methanethiol. With cysteine, no differences in VSC formation were observed between the wild type and genetic variants, indicating that CBL does not contribute to the conversion of cysteine. PMID:17337535
Tolerance analysis of null lenses using an end-use system performance criterion
NASA Astrophysics Data System (ADS)
Rodgers, J. Michael
2000-07-01
An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null- lens tolerances based on their effect on any performance parameter of the end-use system.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Qu, Wei; Diwan, Bhalchandra A.; Liu, Jie; Goyer, Robert A.; Dawson, Tammy; Horton, John L.; Cherian, M. George; Waalkes, Michael P.
2002-01-01
Susceptibility to lead toxicity in MT-null mice and cells, lacking the major forms of the metallothionein (MT) gene, was compared to wild-type (WT) mice or cells. Male MT-null and WT mice received lead in the drinking water (0 to 4000 ppm) for 10 to 20 weeks. Lead did not alter body weight in any group. Unlike WT mice, lead-treated MT-null mice showed dose-related nephromegaly. In addition, after lead exposure renal function was significantly diminished in MT-null mice in comparison to WT mice. MT-null mice accumulated less renal lead than WT mice and did not form lead inclusion bodies, which were present in the kidneys of WT mice. In gene array analysis, renal glutathione S-transferases were up-regulated after lead in MT-null mice only. In vitro studies on fibroblast cell lines derived from MT-null and WT mice showed that MT-null cells were much more sensitive to lead cytotoxicity. MT-null cells accumulated less lead and formed no inclusion bodies. The MT-null phenotype seems to preclude lead-induced inclusion body formation and increases lead toxicity at the organ and cellular level despite reducing lead accumulation. This study reveals important roles for MT in chronic lead toxicity, lead accumulation, and inclusion body formation. PMID:11891201
Adaptive Nulling for the Terrestrial Planet Finder Interferometer
NASA Technical Reports Server (NTRS)
Peters, Robert D.; Lay, Oliver P.; Jeganathan, Muthu; Hirai, Akiko
2006-01-01
A description of adaptive nulling for Terrestrial Planet Finder Interferometer (TPFI) is presented. The topics include: 1) Nulling in TPF-I; 2) Why Do Adaptive Nulling; 3) Parallel High-Order Compensator Design; 4) Phase and Amplitude Control; 5) Development Activates; 6) Requirements; 7) Simplified Experimental Setup; 8) Intensity Correction; and 9) Intensity Dispersion Stability. A short summary is also given on adaptive nulling for the TPFI.
Bayes Factor Approaches for Testing Interval Null Hypotheses
ERIC Educational Resources Information Center
Morey, Richard D.; Rouder, Jeffrey N.
2011-01-01
Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence for 2 hypotheses against each another. One issue…
Facilitating Intellectual Liberation, Engaging the Null Curriculum, and WebCT
ERIC Educational Resources Information Center
Wojcik, Teresa G.; Titone, Connie
2015-01-01
College professors seek to create intellectual experiences that free students from false perceptions and incomplete truths. This article explores one curricular decision and an accompanying pedagogical approach which, the authors argue, facilitates such a liberating experience. In the online environment of WebCT, students post their reactions to…
Implosive Collapse about Magnetic Null Points: A Quantitative Comparison between 2D and 3D Nulls
NASA Astrophysics Data System (ADS)
Thurgood, Jonathan O.; Pontin, David I.; McLaughlin, James A.
2018-03-01
Null collapse is an implosive process whereby MHD waves focus their energy in the vicinity of a null point, forming a current sheet and initiating magnetic reconnection. We consider, for the first time, the case of collapsing 3D magnetic null points in nonlinear, resistive MHD using numerical simulation, exploring key physical aspects of the system as well as performing a detailed parameter study. We find that within a particular plane containing the 3D null, the plasma and current density enhancements resulting from the collapse are quantitatively and qualitatively as per the 2D case in both the linear and nonlinear collapse regimes. However, the scaling with resistivity of the 3D reconnection rate—which is a global quantity—is found to be less favorable when the magnetic null point is more rotationally symmetric, due to the action of increased magnetic back-pressure. Furthermore, we find that, with increasing ambient plasma pressure, the collapse can be throttled, as is the case for 2D nulls. We discuss this pressure-limiting in the context of fast reconnection in the solar atmosphere and suggest mechanisms by which it may be overcome. We also discuss the implications of the results in the context of null collapse as a trigger mechanism of Oscillatory Reconnection, a time-dependent reconnection mechanism, and also within the wider subject of wave–null point interactions. We conclude that, in general, increasingly rotationally asymmetric nulls will be more favorable in terms of magnetic energy release via null collapse than their more symmetric counterparts.
Detoxification genes polymorphisms in SIDS exposed to tobacco smoke.
Filonzi, Laura; Magnani, Cinzia; Lavezzi, Anna Maria; Vaghi, Marina; Nosetti, Luana; Nonnis Marzano, Francesco
2018-03-30
The best hypothesis to explain Sudden Infant Death Syndrome (SIDS) pathogenesis is offered by the "triple risk model", which suggests that an interaction of different variables related to exogenous stressors and infant vulnerability may lead to the syndrome. Environmental factors are triggers that act during a particular sensible period, modulated by intrinsic genetic characteristics. Although literature data show that one of the major SIDS risk factors is smoking exposure, a specific involvement of molecular components has never been highlighted. Starting from these observations and considering the role of GSTT1 and GSTM1 genes functional polymorphisms in the detoxification process, we analyzed GSTM1 and GSTT1 null genotype frequencies in 47 SIDS exposed to tobacco smoke and 75 healthy individuals. A significant association (p < .0001) between the GSTM1 null genotype and SIDS exposed to smoke was found. On the contrary, no association between GSTT1 polymorphism and SIDS was determined. Results indicated the contribution of the GSTM1 -/- genotype resulting in null detoxification activity in SIDS cases, and led to a better comprehension of the triple risk model, highlighting smoking exposure as a real SIDS risk factor on a biochemical basis. Copyright © 2018. Published by Elsevier B.V.
Kwiatkowski, David J; Zhang, Hongbing; Bandura, Jennifer L; Heiberger, Kristina M; Glogauer, Michael; el-Hashemite, Nisreen; Onda, Hiroaki
2002-03-01
Tuberous sclerosis (TSC) is a autosomal dominant genetic disorder caused by mutations in either TSC1 or TSC2, and characterized by benign hamartoma growth. We developed a murine model of Tsc1 disease by gene targeting. Tsc1 null embryos die at mid-gestation from a failure of liver development. Tsc1 heterozygotes develop kidney cystadenomas and liver hemangiomas at high frequency, but the incidence of kidney tumors is somewhat lower than in Tsc2 heterozygote mice. Liver hemangiomas were more common, more severe and caused higher mortality in female than in male Tsc1 heterozygotes. Tsc1 null embryo fibroblast lines have persistent phosphorylation of the p70S6K (S6K) and its substrate S6, that is sensitive to treatment with rapamycin, indicating constitutive activation of the mTOR-S6K pathway due to loss of the Tsc1 protein, hamartin. Hyperphosphorylation of S6 is also seen in kidney tumors in the heterozygote mice, suggesting that inhibition of this pathway may have benefit in control of TSC hamartomas.
Functional characterization of malaria parasites deficient in the K+ channel Kch2.
Ellekvist, Peter; Mlambo, Godfree; Kumar, Nirbhay; Klaerke, Dan A
2017-11-04
K + channels are integral membrane proteins, which contribute to maintain vital parameters such as the cellular membrane potential and cell volume. Malaria parasites encode two K + channel homologues, Kch1 and Kch2, which are well-conserved among members of the Plasmodium genus. In the rodent malaria parasite P. berghei, the functional significance of K + channel homologue PbKch2 was studied using targeted gene knock-out. The knockout parasites were characterized in a mouse model in terms of growth-kinetics and infectivity in the mosquito vector. Furthermore, using a tracer-uptake technique with 86 Rb + as a K + congener, the K + transporting properties of the knockout parasites were assessed. Genetic disruption of Kch2 did not grossly affect the phenotype in terms of asexual replication and pathogenicity in a mouse model. In contrast to Kch1-null parasites, Kch2-null parasites were fully capable of forming oocysts in female Anopheles stephensi mosquitoes. 86 Rb + uptake in Kch2-deficient blood-stage P. berghei parasites (Kch2-null) did not differ from that of wild-type (WT) parasites. About two-thirds of the 86 Rb + uptake in WT and in Kch2-null parasites could be inhibited by K + channel blockers and could be inferred to the presence of functional Kch1 in Kch2 knockout parasites. Kch2 is therefore not required for transport of K + in P. berghei and is not essential to mosquito-stage sporogonic development of the parasite. Copyright © 2017 Elsevier Inc. All rights reserved.
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
Light propagation in the averaged universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of themore » null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.« less
Severe changes in colon epithelium in the Mecp2-null mouse model of Rett syndrome.
Millar-Büchner, Pamela; Philp, Amber R; Gutierrez, Noemí; Villanueva, Sandra; Kerr, Bredford; Flores, Carlos A
2016-12-01
Rett syndrome is best known due to its severe and devastating symptoms in the central nervous system. It is produced by mutations affecting the Mecp2 gene that codes for a transcription factor. Nevertheless, evidence for MECP2 activity has been reported for tissues other than those of the central nervous system. Patients affected by Rett presented with intestinal affections whose origin is still not known. We have observed that the Mecp2-null mice presented with episodes of diarrhea, and decided to study the intestinal phenotype in these mice. Mecp2-null mice or bearing the conditional intestinal deletion of MECP2 were used. Morphometirc and histologic analysis of intestine, and RT-PCR, western blot and immunodetection were perfomed on intestinal samples of the animals. Electrical parameters of the intestine were determined by Ussing chamber experiments in freshly isolated colon samples. First we determined that MECP2 protein is mainly expressed in cells of the lower part of the colonic crypts and not in the small intestine. The colon of the Mecp2-null mice was shorter than that of the wild-type. Histological analysis showed that epithelial cells of the surface have abnormal localization of key membrane proteins like ClC-2 and NHE-3 that participate in the electroneutral NaCl absorption; nevertheless, electrogenic secretion and absorption remain unaltered. We also detected an increase in a proliferation marker in the crypts of the colon samples of the Mecp2-null mice, but the specific silencing of Mecp2 from intestinal epithelium was not able to recapitulate the intestinal phenotype of the Mecp2-null mice. In summary, we showed that the colon is severely affected by Mecp2 silencing in mice. Changes in colon length and epithelial histology are similar to those observed in colitis. Changes in the localization of proteins that participate in fluid absorption can explain watery stools, but the exclusive deletion of Mecp2 from the intestine did not reproduce colon changes observed in the Mecp2-null mice, indicating the participation of other cells in this phenotype and the complex interaction between different cell types in this disease.
Fermilab Education Office - Special Events for Students and Families
students and families. These include: null Fermilab Outdoor Family Fair (K-12) null Wonders of Science (2-7 ) null Family Open House (3-12) null STEM Career Expo (9-12) Search Programs - Search Science Adventures
Beaton, Kara H.; Huffman, W. Cary; Schubert, Michael C.
2015-01-01
Increased ocular positioning misalignments upon exposure to altered gravity levels (g-levels) have been strongly correlated with space motion sickness (SMS) severity, possibly due to underlying otolith asymmetries uncompensated in novel gravitational environments. We investigated vertical and torsional ocular positioning misalignments elicited by the 0 and 1.8 g g-levels of parabolic flight and used these data to develop a computational model to describe how such misalignments might arise. Ocular misalignments were inferred through two perceptual nulling tasks: Vertical Alignment Nulling (VAN) and Torsional Alignment Nulling (TAN). All test subjects exhibited significant differences in ocular misalignments in the novel g-levels, which we postulate to be the result of healthy individuals with 1 g-tuned central compensatory mechanisms unadapted to the parabolic flight environment. Furthermore, the magnitude and direction of ocular misalignments in hypo-g and hyper-g, in comparison to 1 g, were nonlinear and nonmonotonic. Previous linear models of central compensation do not predict this. Here we show that a single model of the form a + bgε, where a, b, and ε are the model parameters and g is the current g-level, accounts for both the vertical and torsional ocular misalignment data observed inflight. Furthering our understanding of oculomotor control is critical for the development of interventions that promote adaptation in spaceflight (e.g., countermeasures for novel g-level exposure) and terrestrial (e.g., rehabilitation protocols for vestibular pathology) environments. PMID:26082691
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
Birth order has no effect on intelligence: a reply and extension of previous findings.
Wichman, Aaron L; Rodgers, Joseph Lee; Maccallum, Robert C
2007-09-01
We address points raised by Zajonc and Sulloway, who reject findings showing that birth order has no effect on intelligence. Many objections to findings of null birth-order results seem to stem from a misunderstanding of the difference between study designs where birth order is confounded with true causal influences on intelligence across families and designs that control for some of these influences. We discuss some of the consequences of not appreciating the nature of this difference. When between-family confounds are controlled using appropriate study designs and techniques such as multilevel modeling, birth order is shown not to influence intelligence. We conclude with an empirical investigation of the replicability and generalizability of this approach.
Generative models for network neuroscience: prospects and promise
Betzel, Richard F.
2017-01-01
Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modelling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here, we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, and utility in intuiting mechanisms, followed by a short history on their use in network science, broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including Caenorhabditis elegans, Drosophila, mouse, rat, cat, macaque and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modelling approach for network neuroscience. PMID:29187640
Modular Hamiltonians on the null plane and the Markov property of the vacuum state
NASA Astrophysics Data System (ADS)
Casini, Horacio; Testé, Eduardo; Torroba, Gonzalo
2017-09-01
We compute the modular Hamiltonians of regions having the future horizon lying on a null plane. For a CFT this is equivalent to regions with a boundary of arbitrary shape lying on the null cone. These Hamiltonians have a local expression on the horizon formed by integrals of the stress tensor. We prove this result in two different ways, and show that the modular Hamiltonians of these regions form an infinite dimensional Lie algebra. The corresponding group of unitary transformations moves the fields on the null surface locally along the null generators with arbitrary null line dependent velocities, but act non-locally outside the null plane. We regain this result in greater generality using more abstract tools on the algebraic quantum field theory. Finally, we show that modular Hamiltonians on the null surface satisfy a Markov property that leads to the saturation of the strong sub-additive inequality for the entropies and to the strong super-additivity of the relative entropy.
NASA Astrophysics Data System (ADS)
LaRue, James P.; Luzanov, Yuriy
2013-05-01
A new extension to the way in which the Bidirectional Associative Memory (BAM) algorithms are implemented is presented here. We will show that by utilizing the singular value decomposition (SVD) and integrating principles of independent component analysis (ICA) into the nullspace (NS) we have created a novel approach to mitigating spurious attractors. We demonstrate this with two applications. The first application utilizes a one-layer association while the second application is modeled after the several hierarchal associations of ventral pathways. The first application will detail the way in which we manage the associations in terms of matrices. The second application will take what we have learned from the first example and apply it to a cascade of a convolutional neural network (CNN) and perceptron this being our signal processing model of the ventral pathways, i.e., visual systems.
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Interaction Analysis of Longevity Interventions Using Survival Curves.
Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim
2018-01-06
A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans . We find that interactions are generally weak even when the standard analysis indicates otherwise.
Interaction Analysis of Longevity Interventions Using Survival Curves
Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim
2018-01-01
A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622
Reconstruction of stochastic temporal networks through diffusive arrival times
Li, Xun; Li, Xiang
2017-01-01
Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications. PMID:28604687
Massless spinning particle and null-string on AdS d : projective-space approach
NASA Astrophysics Data System (ADS)
Uvarov, D. V.
2018-07-01
The massless spinning particle and the tensionless string models on an AdS d background in the projective-space realization are proposed as constrained Hamiltonian systems. Various forms of particle and string Lagrangians are derived and classical mechanics is studied including the Lax-type representation of the equations of motion. After that, the transition to the quantum theory is discussed. The analysis of potential anomalies in the tensionless string model necessitates the introduction of ghosts and BRST charge. It is shown that a quantum BRST charge is nilpotent for any d if coordinate-momentum ordering for the phase-space bosonic variables, Weyl ordering for the fermions and cb () ordering for the ghosts is chosen, while conformal reparametrizations and space-time dilatations turn out to be anomalous for ordering in terms of positive and negative Fourier modes of the phase-space variables and ghosts.
NASA Astrophysics Data System (ADS)
Mädler, Thomas
2013-05-01
Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Variation in the ciliary neurotrophic factor gene and muscle strength in older Caucasian women.
Arking, Dan E; Fallin, Daniele M; Fried, Linda P; Li, Tao; Beamer, Brock A; Xue, Qian Li; Chakravarti, Aravinda; Walston, Jeremy
2006-05-01
To determine whether genetic variants in the ciliary neurotrophic factor (CNTF) gene are associated with muscle strength in older women. Cross-sectional analysis of baseline data from the Women's Health and Aging Studies I (1992) and II (1994), complementary population-based studies. Twelve contiguous ZIP code areas in Baltimore, Maryland. Three hundred sixty-three Caucasian, community-dwelling women aged 70 to 79. Participants were genotyped at the CNTF locus for eight single nucleotide polymorphisms (SNPs), including the null allele rs1800169. The dependent variables were grip strength and the frailty syndrome, identified as presence of three or more of five frailty indicators (weakness, slowness, weight loss, low physical activity, exhaustion). In addition to genotypes, independent variables of body mass index (BMI) and osteoarthritis of the hands were included. Using multivariate linear regression, single SNP analysis identified five SNPs significantly associated with grip strength (P<.05), after adjusting for age, BMI, and osteoarthritis. Haplotype analysis was performed, and a single haplotype associated with grip strength was identified (P<.01). The rs1800169 null allele fully explained the association between this haplotype and grip strength under a recessive model, with individuals homozygous for the null allele exhibiting a 3.80-kg lower (95% confidence interval=1.01-6.58) grip strength. No association was seen between the CNTF null allele and frailty. Individuals homozygous for the CNTF null allele had significantly lower grip strength but did not exhibit overt frailty. Larger prospective studies are needed to confirm this finding and extend it to additional populations.
Exact solutions to force-free electrodynamics in black hole backgrounds
NASA Astrophysics Data System (ADS)
Brennan, T. Daniel; Gralla, Samuel E.; Jacobson, Ted
2013-10-01
A shared property of several of the known exact solutions to the equations of force-free electrodynamics is that their charge-current four-vector is null. We examine the general properties of null-current solutions and then focus on the principal congruences of the Kerr black hole spacetime. We obtain a large class of exact solutions, which are in general time-dependent and non-axisymmetric. These solutions include waves that, surprisingly, propagate without scattering on the curvature of the black hole’s background. They may be understood as generalizations to Robinson’s solutions to vacuum electrodynamics associated with a shear-free congruence of null geodesics. When stationary and axisymmetric, our solutions reduce to those of Menon and Dermer, the only previously known solutions in Kerr. In Kerr, all of our solutions have null electromagnetic fields (\\vec{E} \\cdot \\vec{B} = 0 and E2 = B2). However, in Schwarzschild or flat spacetime there is freedom to add a magnetic monopole field, making the solutions magnetically dominated (B2 > E2). This freedom may be used to reproduce the various flat-spacetime and Schwarzschild-spacetime (split) monopole solutions available in the literature (due to Michel and later authors), and to obtain a large class of time-dependent, non-axisymmetric generalizations. These generalizations may be used to model the magnetosphere of a conducting star that rotates with arbitrary prescribed time-dependent rotation axis and speed. We thus significantly enlarge the class of known exact solutions, while organizing and unifying previously discovered solutions in terms of their null structure.
Magnetic Topology of the Global MHD Configuration on 2010 August 1-2
NASA Astrophysics Data System (ADS)
Titov, V. S.; Mikic, Z.; Torok, T.; Linker, J.; Panasenco, O.
2014-12-01
It appears that the global magnetic topology of the solar corona predetermines to a large extent the magnetic flux transfer during solar eruptions. We have recently analyzed the global topology for a source-surface model of the background magnetic field at the time of the 2010 August 1-2 sympathetic CMEs (Titov et al. 2012). Now we extend this analysis to a more accurate thermodynamic MHD model of the solar corona. As for the source-surface model, we find a similar triplet of pseudo-streamers in the source regions of the eruptions. The new study confirms that all these pseudo-streamers contain separatrix curtains that fan out from a basic magnetic null point, individual for each of the pseudo-streamers. In combination with the associated separatrix domes, these separatrix curtains fully isolate adjacent coronal holes of the like polarity from each other. However, the size and shape of the coronal holes, as well as their open magnetic fluxes and the fluxes in the lobes of the separatrix domes, are very different for the two models. The definition of the open separator field lines, where the (interchange) reconnection between open and closed magnetic flux takes place, is also modified, since the structurally unstable source-surface null lines do not exist anymore in the MHD model. In spite of all these differences, we reassert our earlier hypothesis that magnetic reconnection at these nulls and the associated separators likely plays a key role in coupling the successive eruptions observed by SDO and STEREO. The results obtained provide further validation of our recent simplified MHD model of sympathetic eruptions (Török et al. 2011). Research supported by NASA's Heliophysics Theory and LWS Programs, and NSF/SHINE and NSF/FESD.
NASA Astrophysics Data System (ADS)
Hilburn, Monty D.
Successful lean manufacturing and cellular manufacturing execution relies upon a foundation of leadership commitment and strategic planning built upon solid data and robust analysis. The problem for this study was to create and employ a simple lean transformation planning model and review process that could be used to identify functional support staff resources required to plan and execute lean manufacturing cells within aerospace assembly and manufacturing sites. The lean planning model was developed using available literature for lean manufacturing kaizen best practices and validated through a Delphi panel of lean experts. The resulting model and a standardized review process were used to assess the state of lean transformation planning at five sites of an international aerospace manufacturing and assembly company. The results of the three day, on-site review were compared with baseline plans collected from each of the five sites to determine if there analyzed, with focus on three critical areas of lean planning: the number and type of manufacturing cells identified, the number, type, and duration of planned lean and continuous kaizen events, and the quantity and type of functional staffing resources planned to support the kaizen schedule. Summarized data of the baseline and on-site reviews was analyzed with descriptive statistics. ANOVAs and paired-t tests at 95% significance level were conducted on the means of data sets to determine if null hypotheses related to cell, kaizen event, and support resources could be rejected. The results of the research found significant differences between lean transformation plans developed by site leadership and plans developed utilizing the structured, on-site review process and lean transformation planning model. The null hypothesis that there was no difference between the means of pre-review and on-site cell counts was rejected, as was the null hypothesis that there was no significant difference in kaizen event plans. These factors are critical inputs into the support staffing resources calculation used by the lean planning model. Null hypothesis related to functional support staff resources was rejected for most functional groups, indicating that the baseline site plan inadequately provided for cross-functional staff involvement to support the lean transformation plan. Null hypotheses related to total lean transformation staffing could not be rejected, indicating that while total staffing plans were not significantly different than plans developed during the on-site review and through the use of the lean planning model, the allocation of staffing among various functional groups such as engineering, production, and materials planning was an issue. The on-site review process and simple lean transformation plan developed was determined to be useful in identifying short-comings in lean transformation planning within aerospace manufacturing and assembly sites. It was concluded that the differences uncovered were likely contributing factors affecting the effectiveness of aerospace manufacturing sites' implementation of lean cellular manufacturing.
Natural killer T cell facilitated engraftment of rat skin but not islet xenografts in mice.
Gordon, Ethel J; Kelkar, Vinaya
2009-01-01
We have studied cellular components required for xenograft survival mediated by anti-CD154 monoclonal antibody (mAb) and a transfusion of donor spleen cells and found that the elimination of CD4(+) but not CD8(+) cells significantly improves graft survival. A contribution of other cellular components, such as natural killer (NK) cells and natural killer T (NKT) cells, for costimulation blockade-induced xenograft survival has not been clearly defined. We therefore tested the hypothesis that NK or NKT cells would promote rat islet and skin xenograft acceptance in mice. Lewis rat islets or skin was transplanted into wild type B6 mice or into B6 mice that were Jalpha18(null), CD1(null), or beta2 microglobulin (beta2M)(null) NK 1.1 depleted, or perforin(null). Graft recipients were pretreated with an infusion of donor derived spleen cells and a brief course of anti-CD154 mAb treatments. Additional groups received mAb or cells only. We first observed that the depletion of NK1.1 cells does not significantly interfere with graft survival in C57BL/6 (B6) mice. We used NKT cell deficient B6 mice to test the hypothesis that NKT cells are involved in islet and skin xenograft survival in our model. These mice bear a null mutation in the gene for the Jalpha18 component of the T-cell receptor. The component is uniquely associated with NKT cells. We found no difference in islet xenograft survival between Jalpha18(null) and wild type B6 mice. In contrast, median skin graft survival appeared shorter in Jalpha18(null) recipients. These data imply a role for Jalpha18(+) NKT cells in skin xenograft survival in treated mice. In order to confirm this inference, we tested skin xenograft survival in B6 CD1(null) mice because NKT cells are CD1 restricted. Results of these trials demonstrate that the absence of CD1(+) cells adversely affects rat skin graft survival. An additional assay in beta2M(null) mice demonstrated a requirement for major histocompatibility complex (MHC) class I expression in the graft host, and we demonstrate that CD1 is the requisite MHC component. We further demonstrated that, unlike reports for allograft survival, skin xenograft survival does not require perforin-secreting NK cells. We conclude that MHC class I(+) CD1(+) Jalpha18(+) NKT cells promote the survival of rat skin but not rat islet xenografts. These studies implicate different mechanisms for inducing and maintaining islet vs. skin xenograft survival in mice treated with donor antigen and anti-CD154 mAb, and further indicate a role for NKT cells but not NK cells in skin xenograft survival.
Altered fronto-striatal functions in the Gdi1-null mouse model of X-linked Intellectual Disability.
Morè, Lorenzo; Künnecke, Basil; Yekhlef, Latefa; Bruns, Andreas; Marte, Antonella; Fedele, Ernesto; Bianchi, Veronica; Taverna, Stefano; Gatti, Silvia; D'Adamo, Patrizia
2017-03-06
RAB-GDP dissociation inhibitor 1 (GDI1) loss-of-function mutations are responsible for a form of non-specific X-linked Intellectual Disability (XLID) where the only clinical feature is cognitive impairment. GDI1 patients are impaired in specific aspects of executive functions and conditioned response, which are controlled by fronto-striatal circuitries. Previous molecular and behavioral characterization of the Gdi1-null mouse revealed alterations in the total number/distribution of hippocampal and cortical synaptic vesicles as well as hippocampal short-term synaptic plasticity, and memory deficits. In this study, we employed cognitive protocols with high translational validity to human condition that target the functionality of cortico-striatal circuitry such as attention and stimulus selection ability with progressive degree of complexity. We previously showed that Gdi1-null mice are impaired in some hippocampus-dependent forms of associative learning assessed by aversive procedures. Here, using appetitive-conditioning procedures we further investigated associative learning deficits sustained by the fronto-striatal system. We report that Gdi1-null mice are impaired in attention and associative learning processes, which are a key part of the cognitive impairment observed in XLID patients. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
McCombie, Gregor; Medina-Gomez, Gema; Lelliott, Christopher J; Vidal-Puig, Antonio; Griffin, Julian L
2012-06-18
The peroxisome proliferator-activated receptor-γ coactivators (PGC-1) are transcriptional coactivators with an important role in mitochondrial biogenesis and regulation of genes involved in the electron transport chain and oxidative phosphorylation in oxidative tissues including cardiac tissue. These coactivators are thought to play a key role in the development of obesity, type 2 diabetes and the metabolic syndrome. In this study we have used a combined metabolomic and lipidomic analysis of cardiac tissue from the PGC-1β null mouse to examine the effects of a high fat diet on this organ. Multivariate statistics readily separated tissue from PGC-1β null mice from their wild type controls either in gender specific models or in combined datasets. This was associated with an increase in creatine and a decrease in taurine in the null mouse, and an increase in myristic acid and a reduction in long chain polyunsaturated fatty acids for both genders. The most profound changes were detected by liquid chromatography mass spectrometry analysis of intact lipids with the tissue from the null mouse having a profound increase in a number of triglycerides. The metabolomic and lipodomic changes indicate PGC-1β has a profound influence on cardiac metabolism.
How to talk about protein-level false discovery rates in shotgun proteomics.
The, Matthew; Tasnim, Ayesha; Käll, Lukas
2016-09-01
A frequently sought output from a shotgun proteomics experiment is a list of proteins that we believe to have been present in the analyzed sample before proteolytic digestion. The standard technique to control for errors in such lists is to enforce a preset threshold for the false discovery rate (FDR). Many consider protein-level FDRs a difficult and vague concept, as the measurement entities, spectra, are manifestations of peptides and not proteins. Here, we argue that this confusion is unnecessary and provide a framework on how to think about protein-level FDRs, starting from its basic principle: the null hypothesis. Specifically, we point out that two competing null hypotheses are used concurrently in today's protein inference methods, which has gone unnoticed by many. Using simulations of a shotgun proteomics experiment, we show how confusing one null hypothesis for the other can lead to serious discrepancies in the FDR. Furthermore, we demonstrate how the same simulations can be used to verify FDR estimates of protein inference methods. In particular, we show that, for a simple protein inference method, decoy models can be used to accurately estimate protein-level FDRs for both competing null hypotheses. © 2016 The Authors. Proteomics Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chakraborty, Anirban; Wakamiya, Maki; Venkova-Canova, Tatiana; Pandita, Raj K; Aguilera-Aguirre, Leopoldo; Sarker, Altaf H; Singh, Dharmendra Kumar; Hosoki, Koa; Wood, Thomas G; Sharma, Gulshan; Cardenas, Victor; Sarkar, Partha S; Sur, Sanjiv; Pandita, Tej K; Boldogh, Istvan; Hazra, Tapas K
2015-10-09
Why mammalian cells possess multiple DNA glycosylases (DGs) with overlapping substrate ranges for repairing oxidatively damaged bases via the base excision repair (BER) pathway is a long-standing question. To determine the biological role of these DGs, null animal models have been generated. Here, we report the generation and characterization of mice lacking Neil2 (Nei-like 2). As in mice deficient in each of the other four oxidized base-specific DGs (OGG1, NTH1, NEIL1, and NEIL3), Neil2-null mice show no overt phenotype. However, middle-aged to old Neil2-null mice show the accumulation of oxidative genomic damage, mostly in the transcribed regions. Immuno-pulldown analysis from wild-type (WT) mouse tissue showed the association of NEIL2 with RNA polymerase II, along with Cockayne syndrome group B protein, TFIIH, and other BER proteins. Chromatin immunoprecipitation analysis from mouse tissue showed co-occupancy of NEIL2 and RNA polymerase II only on the transcribed genes, consistent with our earlier in vitro findings on NEIL2's role in transcription-coupled BER. This study provides the first in vivo evidence of genomic region-specific repair in mammals. Furthermore, telomere loss and genomic instability were observed at a higher frequency in embryonic fibroblasts from Neil2-null mice than from the WT. Moreover, Neil2-null mice are much more responsive to inflammatory agents than WT mice. Taken together, our results underscore the importance of NEIL2 in protecting mammals from the development of various pathologies that are linked to genomic instability and/or inflammation. NEIL2 is thus likely to play an important role in long term genomic maintenance, particularly in long-lived mammals such as humans. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Role of Plasmodium vivax Duffy-binding protein 1 in invasion of Duffy-null Africans
Gunalan, Karthigayan; Lo, Eugenia; Hostetler, Jessica B.; Yewhalaw, Delenasaw; Mu, Jianbing; Neafsey, Daniel E.; Yan, Guiyun; Miller, Louis H.
2016-01-01
The ability of the malaria parasite Plasmodium vivax to invade erythrocytes is dependent on the expression of the Duffy blood group antigen on erythrocytes. Consequently, Africans who are null for the Duffy antigen are not susceptible to P. vivax infections. Recently, P. vivax infections in Duffy-null Africans have been documented, raising the possibility that P. vivax, a virulent pathogen in other parts of the world, may expand malarial disease in Africa. P. vivax binds the Duffy blood group antigen through its Duffy-binding protein 1 (DBP1). To determine if mutations in DBP1 resulted in the ability of P. vivax to bind Duffy-null erythrocytes, we analyzed P. vivax parasites obtained from two Duffy-null individuals living in Ethiopia where Duffy-null and -positive Africans live side-by-side. We determined that, although the DBP1s from these parasites contained unique sequences, they failed to bind Duffy-null erythrocytes, indicating that mutations in DBP1 did not account for the ability of P. vivax to infect Duffy-null Africans. However, an unusual DNA expansion of DBP1 (three and eight copies) in the two Duffy-null P. vivax infections suggests that an expansion of DBP1 may have been selected to allow low-affinity binding to another receptor on Duffy-null erythrocytes. Indeed, we show that Salvador (Sal) I P. vivax infects Squirrel monkeys independently of DBP1 binding to Squirrel monkey erythrocytes. We conclude that P. vivax Sal I and perhaps P. vivax in Duffy-null patients may have adapted to use new ligand–receptor pairs for invasion. PMID:27190089
Aoki, Yoshitsugu; Nagata, Tetsuya; Yokota, Toshifumi; Nakamura, Akinori; Wood, Matthew J A; Partridge, Terence; Takeda, Shin'ichi
2013-12-15
Phosphorodiamidate morpholino oligomer (PMO)-mediated exon skipping is among the more promising approaches to the treatment of several neuromuscular disorders including Duchenne muscular dystrophy. The main weakness of this approach arises from the low efficiency and sporadic nature of the delivery of charge-neutral PMO into muscle fibers, the mechanism of which is unknown. In this study, to test our hypothesis that muscle fibers take up PMO more efficiently during myotube formation, we induced synchronous muscle regeneration by injection of cardiotoxin into the tibialis anterior muscle of Dmd exon 52-deficient mdx52 and wild-type mice. Interestingly, by in situ hybridization, we detected PMO mainly in embryonic myosin heavy chain-positive regenerating fibers. In addition, we showed that PMO or 2'-O-methyl phosphorothioate is taken up efficiently into C2C12 myotubes when transfected 24-72 h after the induction of differentiation but is poorly taken up into undifferentiated C2C12 myoblasts suggesting efficient uptake of PMO in the early stages of C2C12 myotube formation. Next, we tested the therapeutic potential of PMO for laminin-α2 chain-null dy(3K)/dy(3K) mice: a model of merosin-deficient congenital muscular dystrophy (MDC1A) with active muscle regeneration. We confirmed the recovery of laminin-α2 chain and slightly prolonged life span following skipping of the mutated exon 4 in dy(3K)/dy(3K) mice. These findings support the idea that PMO entry into fibers is dependent on a developmental stage in myogenesis rather than on dystrophinless muscle membranes and provide a platform for developing PMO-mediated therapies for a variety of muscular disorders, such as MDC1A, that involve active muscle regeneration.
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
ERIC Educational Resources Information Center
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Unified Dark Matter scalar field models with fast transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Bruni, Marco; Piattella, Oliver F.
2011-02-01
We investigate the general properties of Unified Dark Matter (UDM) scalar field models with Lagrangians with a non-canonical kinetic term, looking specifically for models that can produce a fast transition between an early Einstein-de Sitter CDM-like era and a later Dark Energy like phase, similarly to the barotropic fluid UDM models in JCAP01(2010)014. However, while the background evolution can be very similar in the two cases, the perturbations are naturally adiabatic in fluid models, while in the scalar field case they are necessarily non-adiabatic. The new approach to building UDM Lagrangians proposed here allows to escape the common problem ofmore » the fine-tuning of the parameters which plague many UDM models. We analyse the properties of perturbations in our model, focusing on the the evolution of the effective speed of sound and that of the Jeans length. With this insight, we can set theoretical constraints on the parameters of the model, predicting sufficient conditions for the model to be viable. An interesting feature of our models is that what can be interpreted as w{sub DE} can be < −1 without violating the null energy conditions.« less
Brachvogel, Bent; Zaucke, Frank; Dave, Keyur; Norris, Emma L; Stermann, Jacek; Dayakli, Münire; Koch, Manuel; Gorman, Jeffrey J; Bateman, John F; Wilson, Richard
2013-05-10
Collagen IX is an integral cartilage extracellular matrix component important in skeletal development and joint function. Proteomic analysis and validation studies revealed novel alterations in collagen IX null cartilage. Matrilin-4, collagen XII, thrombospondin-4, fibronectin, βig-h3, and epiphycan are components of the in vivo collagen IX interactome. We applied a proteomics approach to advance our understanding of collagen IX ablation in cartilage. The cartilage extracellular matrix is essential for endochondral bone development and joint function. In addition to the major aggrecan/collagen II framework, the interacting complex of collagen IX, matrilin-3, and cartilage oligomeric matrix protein (COMP) is essential for cartilage matrix stability, as mutations in Col9a1, Col9a2, Col9a3, Comp, and Matn3 genes cause multiple epiphyseal dysplasia, in which patients develop early onset osteoarthritis. In mice, collagen IX ablation results in severely disturbed growth plate organization, hypocellular regions, and abnormal chondrocyte shape. This abnormal differentiation is likely to involve altered cell-matrix interactions but the mechanism is not known. To investigate the molecular basis of the collagen IX null phenotype we analyzed global differences in protein abundance between wild-type and knock-out femoral head cartilage by capillary HPLC tandem mass spectrometry. We identified 297 proteins in 3-day cartilage and 397 proteins in 21-day cartilage. Components that were differentially abundant between wild-type and collagen IX-deficient cartilage included 15 extracellular matrix proteins. Collagen IX ablation was associated with dramatically reduced COMP and matrilin-3, consistent with known interactions. Matrilin-1, matrilin-4, epiphycan, and thrombospondin-4 levels were reduced in collagen IX null cartilage, providing the first in vivo evidence for these proteins belonging to the collagen IX interactome. Thrombospondin-4 expression was reduced at the mRNA level, whereas matrilin-4 was verified as a novel collagen IX-binding protein. Furthermore, changes in TGFβ-induced protein βig-h3 and fibronectin abundance were found in the collagen IX knock-out but not associated with COMP ablation, indicating specific involvement in the abnormal collagen IX null cartilage. In addition, the more widespread expression of collagen XII in the collagen IX-deficient cartilage suggests an attempted compensatory response to the absence of collagen IX. Our differential proteomic analysis of cartilage is a novel approach to identify candidate matrix protein interactions in vivo, underpinning further analysis of mutant cartilage lacking other matrix components or harboring disease-causing mutations.
Replication Unreliability in Psychology: Elusive Phenomena or “Elusive” Statistical Power?
Tressoldi, Patrizio E.
2012-01-01
The focus of this paper is to analyze whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Applying the Null Hypothesis Statistical Testing (NHST), still the widest used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as example the results of meta-analyses related to four different controversial phenomena, subliminal semantic priming, incubation effect for problem solving, unconscious thought theory, and non-local perception, it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size (ES) of the typical study, is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low ESs. We conclude by providing some suggestions on how to increase the statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or to refute the reality of a phenomenon with small ES. PMID:22783215
Brownian model of transcriptome evolution and phylogenetic network visualization between tissues.
Gu, Xun; Ruan, Hang; Su, Zhixi; Zou, Yangyun
2017-09-01
While phylogenetic analysis of transcriptomes of the same tissue is usually congruent with the species tree, the controversy emerges when multiple tissues are included, that is, whether species from the same tissue are clustered together, or different tissues from the same species are clustered together. Recent studies have suggested that phylogenetic network approach may shed some lights on our understanding of multi-tissue transcriptome evolution; yet the underlying evolutionary mechanism remains unclear. In this paper we develop a Brownian-based model of transcriptome evolution under the phylogenetic network that can statistically distinguish between the patterns of species-clustering and tissue-clustering. Our model can be used as a null hypothesis (neutral transcriptome evolution) for testing any correlation in tissue evolution, can be applied to cancer transcriptome evolution to study whether two tumors of an individual appeared independently or via metastasis, and can be useful to detect convergent evolution at the transcriptional level. Copyright © 2017. Published by Elsevier Inc.
Wilkinson, Michael
2014-03-01
Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.
SIMULATION OF SUMMER-TIME DIURNAL BACTERIAL DYNAMICS IN THE ATMOSPHERIC SURFACE LAYER
A model was prepared to simulate the observed concentration dynamics of culturable bacteria in the diurnal summer atmosphere at a Willamette River Valley, Oregon location. The meteorological and bacterial mechanisms included in a dynamic null-dimensional model with one-second tim...
Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.
2013-01-01
Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430
Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models
NASA Astrophysics Data System (ADS)
Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.
2015-11-01
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.
Toroidally symmetric plasma vortex at tokamak divertor null point
Umansky, M. V.; Ryutov, D. D.
2016-03-09
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point with the time-evolution defined by interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in simulations are consistent with tokamak experiments which indicate the presence of enhanced transportmore » at the null point.« less
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gate keeping strategies often impose too many logical restrictions to make sense, particularly to deal with the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinical relevant orderings or sets can be a viable alternative to resolving the illogical problems requiring more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism as a result of this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
Guo, Dongsheng; Sarkar, Joy; Ahmed, Mohamed R; Viswakarma, Navin; Jia, Yuzhi; Yu, Songtao; Sambasiva Rao, M; Reddy, Janardan K
2006-08-25
The constitutive androstane receptor (CAR) regulates transcription of phenobarbital-inducible genes that encode xenobiotic-metabolizing enzymes in liver. CAR is localized to the hepatocyte cytoplasm but to be functional, it translocates into the nucleus in the presence of phenobarbital-like CAR ligands. We now demonstrate that adenovirally driven EGFP-CAR, as expected, translocates into the nucleus of normal wild-type hepatocytes following phenobarbital treatment under both in vivo and in vitro conditions. Using this approach we investigated the role of transcription coactivators PBP and PRIP in the translocation of EGFP-CAR into the nucleus of PBP and PRIP liver conditional null mouse hepatocytes. We show that coactivator PBP is essential for nuclear translocation of CAR but not PRIP. Adenoviral expression of both PBP and EGFP-CAR restored phenobarbital-mediated nuclear translocation of exogenously expressed CAR in PBP null livers in vivo and in PBP null primary hepatocytes in vitro. CAR translocation into the nucleus of PRIP null livers resulted in the induction of CAR target genes such as CYP2B10, necessary for the conversion of acetaminophen to its hepatotoxic intermediate metabolite, N-acetyl-p-benzoquinone imine. As a consequence, PRIP-deficiency in liver did not protect from acetaminophen-induced hepatic necrosis, unlike that exerted by PBP deficiency. These results establish that transcription coactivator PBP plays a pivotal role in nuclear localization of CAR, that it is likely that PBP either enhances nuclear import or nuclear retention of CAR in hepatocytes, and that PRIP is redundant for CAR function.
Mandrile, Giorgia; van Woerden, Christiaan S; Berchialla, Paola; Beck, Bodo B; Acquaviva Bourdain, Cécile; Hulton, Sally-Anne; Rumsby, Gill
2014-12-01
Primary hyperoxaluria type 1 displays a heterogeneous phenotype, likely to be affected by genetic and non-genetic factors, including timeliness of diagnosis and quality of care. As previous genotype-phenotype studies were hampered by limited patient numbers the European OxalEurope Consortium was constituted. This preliminary retrospective report is based on 526 patients of which 410 have the AGXT genotype defined. We grouped mutations by the predicted effect as null, missense leading to mistargeting (G170R), and other missense, and analyzed their phenotypic correlations. Median age of end-stage renal disease increased from 9.9 for 88 homozygous null patients, 11.5 for 42 heterozygous null/missense, 16.9 for 116 homozygous missense patients, 25.1 for 61 G170R/null patients, 31.2 for 32 G170R/missense patients, and 33.9 years for 71 homozygous G170R patients. The outcome of some recurrent missense mutations (p.I244T, p.F152I, p.M195R, p.D201E, p.S81L, p.R36C) and an unprecedented number of G170R homozygotes is described in detail. Diagnosis is still delayed and actions aimed at increasing awareness of primary hyperoxaluria type 1 are recommended. Thus, in addition to G170R, other causative mutations are associated with later onset of end-stage renal disease. The OxalEurope registry will provide necessary tools for characterizing those genetic and non-genetic factors through a combination of genetic, functional, and biostatistical approaches.
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; ...
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques aremore » examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.« less
Nitric oxide negatively regulates mammalian adult neurogenesis
NASA Astrophysics Data System (ADS)
Packer, Michael A.; Stasiv, Yuri; Benraiss, Abdellatif; Chmielnicki, Eva; Grinberg, Alexander; Westphal, Heiner; Goldman, Steven A.; Enikolopov, Grigori
2003-08-01
Neural progenitor cells are widespread throughout the adult central nervous system but only give rise to neurons in specific loci. Negative regulators of neurogenesis have therefore been postulated, but none have yet been identified as subserving a significant role in the adult brain. Here we report that nitric oxide (NO) acts as an important negative regulator of cell proliferation in the adult mammalian brain. We used two independent approaches to examine the function of NO in adult neurogenesis. In a pharmacological approach, we suppressed NO production in the rat brain by intraventricular infusion of an NO synthase inhibitor. In a genetic approach, we generated a null mutant neuronal NO synthase knockout mouse line by targeting the exon encoding active center of the enzyme. In both models, the number of new cells generated in neurogenic areas of the adult brain, the olfactory subependyma and the dentate gyrus, was strongly augmented, which indicates that division of neural stem cells in the adult brain is controlled by NO and suggests a strategy for enhancing neurogenesis in the adult central nervous system.
Off-Axis Nulling Transfer Function Measurement: A First Assessment
NASA Technical Reports Server (NTRS)
Vedova, G. Dalla; Menut, J.-L.; Millour, F.; Petrov, R.; Cassaing, F.; Danchi, W. C.; Jacquinod, S.; Lhome, E.; Lopez, B.; Lozi, J.;
2013-01-01
We want to study a polychromatic inverse problem method with nulling interferometers to obtain information on the structures of the exozodiacal light. For this reason, during the first semester of 2013, thanks to the support of the consortium PERSEE, we launched a campaign of laboratory measurements with the nulling interferometric test bench PERSEE, operating with 9 spectral channels between J and K bands. Our objective is to characterise the transfer function, i.e. the map of the null as a function of wavelength for an off-axis source, the null being optimised on the central source or on the source photocenter. We were able to reach on-axis null depths better than 10(exp -4). This work is part of a broader project aiming at creating a simulator of a nulling interferometer in which typical noises of a real instrument are introduced. We present here our first results.
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta.In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-contol experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. To show a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures false altered proteins discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
Tissue-specific roles for sonic hedgehog signaling in establishing thymus and parathyroid organ fate
Bain, Virginia E.; Gordon, Julie; O'Neil, John D.; Ramos, Isaias; Richie, Ellen R.
2016-01-01
The thymus and parathyroids develop from third pharyngeal pouch (3rd pp) endoderm. Our previous studies show that Shh null mice have smaller, aparathyroid primordia in which thymus fate specification extends into the pharynx. SHH signaling is active in both dorsal pouch endoderm and neighboring neural crest (NC) mesenchyme. It is unclear which target tissue of SHH signaling is required for the patterning defects in Shh mutants. Here, we used a genetic approach to ectopically activate or delete the SHH signal transducer Smo in either pp endoderm or NC mesenchyme. Although no manipulation recapitulated the Shh null phenotype, manipulation of SHH signaling in either the endoderm or NC mesenchyme had direct and indirect effects on both cell types during fate specification and organogenesis. SHH pathway activation throughout pouch endoderm activated ectopic Tbx1 expression and partially suppressed the thymus-specific transcription factor Foxn1, identifying Tbx1 as a key target of SHH signaling in the 3rd pp. However, ectopic SHH signaling was insufficient to expand the GCM2-positive parathyroid domain, indicating that multiple inputs, some of which might be independent of SHH signaling, are required for parathyroid fate specification. These data support a model in which SHH signaling plays both positive and negative roles in patterning and organogenesis of the thymus and parathyroids. PMID:27633995
Cao, Xiao-Pei; Han, Dong-Mei; Zhao, Li; Guo, Zi-Kuan; Xiao, Feng-Jun; Zhang, Yi-Kun; Zhang, Xiao-Yan; Wang, Li-Sheng; Wang, Heng-Xiang; Wang, Hua
2016-03-01
Specific and effective therapy for prevention or reversal of bronchiolitis obliterans (BO) is lacking. In this study, we evaluated the therapeutic effect of hepatocyte growth factor (HGF) gene modified mesenchymal stromal cells (MSCs) on BO. A mouse model of experimental BO was established by subcutaneously transplanting the tracheas from C57BL/6 mice into Balb/C recipients, which were then administered saline, Ad-HGF-modified human umbilical cord-MSCs (MSCs-HGF) or Ad-Null-modified MSCs (MSCs-Null). The therapeutic effects of MSCs-Null and MSCs-HGF were evaluated by using fluorescence-activated cell sorting (FACS) for lymphocyte immunophenotype of spleen, enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (rt-PCR) for cytokine expression, and histopathological analysis for the transplanted trachea. The histopathologic recovery of allograft tracheas was improved significantly after MSCs-Null and MSCs-HGF treatment and the beneficial effects were particularly observed in MSCs-HGF-treated mice. Furthermore, the allo-transplantation-induced immunophenotype disorders of the spleen, including regulatory T (Treg), T helper (Th)1, Th2 and Th17, were attenuated in both cell-treated groups. MSCs-HGF treatment reduced expression and secretion of inflammation cytokines interferon-gamma (IFN-γ), and increased expression of anti-inflammatory cytokine interleukin (IL)-4 and IL-10. It also decreased the expression level of the profibrosis factor transforming growth factor (TGF)-β. Treatment of BO with HGF gene modified MSCs results in reduction of local inflammation and promotion in recovery of allograft trachea histopathology. These findings might provide an effective therapeutic strategy for BO. Copyright © 2015 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
The disruption of central CO2 chemosensitivity in a mouse model of Rett syndrome
Zhang, Xiaoli; Su, Junda; Cui, Ningren; Gai, Hongyu; Wu, Zhongying
2011-01-01
People with Rett syndrome (RTT) have breathing instability in addition to other neuropathological manifestations. The breathing disturbances contribute to the high incidence of unexplained death and abnormal brain development. However, the cellular mechanisms underlying the breathing abnormalities remain unclear. To test the hypothesis that the central CO2 chemoreception in these people is disrupted, we studied the CO2 chemosensitivity in a mouse model of RTT. The Mecp2-null mice showed a selective loss of their respiratory response to 1–3% CO2 (mild hypercapnia), whereas they displayed more regular breathing in response to 6–9% CO2 (severe hypercapnia). The defect was alleviated with the NE uptake blocker desipramine (10 mg·kg−1·day−1 ip, for 5–7 days). Consistent with the in vivo observations, in vitro studies in brain slices indicated that CO2 chemosensitivity of locus coeruleus (LC) neurons was impaired in Mecp2-null mice. Two major neuronal pH-sensitive Kir currents that resembled homomeric Kir4.1 and heteromeric Ki4.1/Kir5.1 channels were identified in the LC neurons. The screening of Kir channels with real-time PCR indicated the overexpression of Kir4.1 in the LC region of Mecp2-null mice. In a heterologous expression system, an overexpression of Kir4.1 resulted in a reduction in the pH sensitivity of the heteromeric Kir4.1-Kir5.1 channels. Given that Kir4.1 and Kir5.1 subunits are also expressed in brain stem respiration-related areas, the Kir4.1 overexpression may not allow CO2 to be detected until hypercapnia becomes severe, leading to periodical hyper- and hypoventilation in Mecp2-null mice and, perhaps, in people with RTT as well. PMID:21307341
Stevens, Karen E; Choo, Kevin S; Stitzel, Jerry A; Marks, Michael J; Adams, Catherine E
2014-03-13
Perinatal choline supplementation has produced several benefits in rodent models, from improved learning and memory to protection from the behavioral effects of fetal alcohol exposure. We have shown that supplemented choline through gestation and lactation produces long-term improvement in deficient sensory inhibition in DBA/2 mice which models a similar deficit in schizophrenia patients. The present study extends that research by feeding normal or supplemented choline diets to DBA/2 mice carrying the null mutation for the α7 nicotinic receptor gene (Chrna7). DBA/2 mice heterozygotic for Chrna7 were bred together. Dams were placed on supplemented (5 gm/kg diet) or normal (1.1 gm/kg diet) choline at mating and remained on the specific diet until offspring weaning. Thereafter, offspring were fed standard rodent chow. Adult offspring were assessed for sensory inhibition. Brains were obtained to ascertain hippocampal α7 nicotinic receptor levels. Choline-supplemented mice heterozygotic or null-mutant for Chrna7 failed to show improvement in sensory inhibition. Only wildtype choline-supplemented mice showed improvement with the effect solely through a decrease in test amplitude. This supports the hypothesis that gestational-choline supplementation is acting through the α7 nicotinic receptor to improve sensory inhibition. Although there was a significant gene-dose-related change in hippocampal α7 receptor numbers, binding studies did not reveal any choline-dose-related change in binding in any hippocampal region, the interaction being driven by a significant genotype main effect (wildtype>heterozygote>null mutant). These data parallel a human study wherein the offspring of pregnant women receiving choline supplementation during gestation, showed better sensory inhibition than offspring of women on placebo. Published by Elsevier B.V.
Stevens, Karen E.; Choo, Kevin S.; Stitzel, Jerry A.; Marks, Michael J.; Adams, Catherine E.
2014-01-01
Perinatal choline supplementation has produced several benefits in rodent models, from improved learning and memory to protection from the behavioral effects of fetal alcohol exposure. We have shown that supplemented choline through gestation and lactation produces long-term improvement in deficient sensory inhibition in DBA/2 mice which models a similar deficit in schizophrenia patients. The present study extends that research by feeding normal or supplemented choline diets to DBA/2 mice carrying the null mutation for the α7 nicotinic receptor gene (Chrna7). DBA/2 mice heterozygotic for Chrna7 were bred together. Dams were placed on supplemented (5 gm/kg diet) or normal (1.1 gm/kg diet) choline at mating and remained on the specific diet until offspring weaning. Thereafter, offspring were fed standard rodent chow. Adult offspring were assessed for sensory inhibition. Brains were obtained to ascertain hippocampal α7 nicotinic receptor levels. Choline-supplemented mice heterozygotic or null-mutant for Chrna7 failed to show improvement in sensory inhibition. Only wildtype choline-supplemented mice showed improvement with the effect solely through a decrease in test amplitude. This supports the hypothesis that gestational-choline supplementation is acting through the α7 nicotinic receptor to improve sensory inhibition. Although there was a significant gene-dose-related change in hippocampal α7 receptor numbers, binding studies did not reveal any choline-dose-related change in binding in any hippocampal region, the interaction being driven by a significant genotype main effect (wildtype>heterozygote>null mutant). These data parallel a human study wherein the offspring of pregnant women receiving choline supplementation during gestation, showed better sensory inhibition than offspring of women on placebo. PMID:24462939
Loss of Vitamin D Receptor Produces Polyuria by Increasing Thirst
Kong, Juan; Zhang, Zhongyi; Li, Dongdong; Wong, Kari E.; Zhang, Yan; Szeto, Frances L.; Musch, Mark W.; Li, Yan Chun
2008-01-01
Vitamin D receptor (VDR)-null mice develop polyuria, but the underlying mechanism remains unknown. In this study, we investigated the relationship between vitamin D and homeostasis of water and electrolytes. VDR-null mice had polyuria, but the urine osmolarity was normal as a result of high salt excretion. The urinary responses to water restriction and to vasopressin were similar between wild-type and VDR-null mice, suggesting intact fluid-handling capacity in VDR-null mice. Compared with wild-type mice, however, renin and angiotensin II were dramatically upregulated in the kidney and brain of VDR-null mice, leading to a marked increase in water intake and salt appetite. Angiotensin II–mediated upregulation of intestinal NHE3 expression partially explained the increased salt absorption and excretion in VDR-null mice. In the brain of VDR-null mice, expression of c-Fos, which is known to associate with increased water intake, was increased in the hypothalamic paraventricular nucleus and the subfornical organ. Treatment with an angiotensin II type 1 receptor antagonist normalized water intake, urinary volume, and c-Fos expression in VDR-null mice. Furthermore, despite a salt-deficient diet to reduce intestinal salt absorption, VDR-null mice still maintained the increased water intake and urinary output. Together, these data indicate that the polyuria observed in VDR-null mice is not caused by impaired renal fluid handling or increased intestinal salt absorption but rather is the result of increased water intake induced by the increase in systemic and brain angiotensin II. PMID:18832438
Loss of vitamin D receptor produces polyuria by increasing thirst.
Kong, Juan; Zhang, Zhongyi; Li, Dongdong; Wong, Kari E; Zhang, Yan; Szeto, Frances L; Musch, Mark W; Li, Yan Chun
2008-12-01
Vitamin D receptor (VDR)-null mice develop polyuria, but the underlying mechanism remains unknown. In this study, we investigated the relationship between vitamin D and homeostasis of water and electrolytes. VDR-null mice had polyuria, but the urine osmolarity was normal as a result of high salt excretion. The urinary responses to water restriction and to vasopressin were similar between wild-type and VDR-null mice, suggesting intact fluid-handling capacity in VDR-null mice. Compared with wild-type mice, however, renin and angiotensin II were dramatically upregulated in the kidney and brain of VDR-null mice, leading to a marked increase in water intake and salt appetite. Angiotensin II-mediated upregulation of intestinal NHE3 expression partially explained the increased salt absorption and excretion in VDR-null mice. In the brain of VDR-null mice, expression of c-Fos, which is known to associate with increased water intake, was increased in the hypothalamic paraventricular nucleus and the subfornical organ. Treatment with an angiotensin II type 1 receptor antagonist normalized water intake, urinary volume, and c-Fos expression in VDR-null mice. Furthermore, despite a salt-deficient diet to reduce intestinal salt absorption, VDR-null mice still maintained the increased water intake and urinary output. Together, these data indicate that the polyuria observed in VDR-null mice is not caused by impaired renal fluid handling or increased intestinal salt absorption but rather is the result of increased water intake induced by the increase in systemic and brain angiotensin II.
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM)-algorithm procedure is presented that extends Cheliak et al. (1983) method of maximum-likelihood estimation of mating system parameters of mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and allele frequencies (Pi) at loci in outcrossing pollen, at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
Hyperactivation of Nrf2 in early tubular development induces nephrogenic diabetes insipidus
Suzuki, Takafumi; Seki, Shiori; Hiramoto, Keiichiro; Naganuma, Eriko; Kobayashi, Eri H.; Yamaoka, Ayaka; Baird, Liam; Takahashi, Nobuyuki; Sato, Hiroshi; Yamamoto, Masayuki
2017-01-01
NF-E2-related factor-2 (Nrf2) regulates cellular responses to oxidative and electrophilic stress. Loss of Keap1 increases Nrf2 protein levels, and Keap1-null mice die of oesophageal hyperkeratosis because of Nrf2 hyperactivation. Here we show that deletion of oesophageal Nrf2 in Keap1-null mice allows survival until adulthood, but the animals develop polyuria with low osmolality and bilateral hydronephrosis. This phenotype is caused by defects in water reabsorption that are the result of reduced aquaporin 2 levels in the kidney. Renal tubular deletion of Keap1 promotes nephrogenic diabetes insipidus features, confirming that Nrf2 activation in developing tubular cells causes a water reabsorption defect. These findings suggest that Nrf2 activity should be tightly controlled during development in order to maintain renal homeostasis. In addition, tissue-specific ablation of Nrf2 in Keap1-null mice might create useful animal models to uncover novel physiological functions of Nrf2. PMID:28233855
Abnormal Mammary Development in 129:STAT1-Null Mice is Stroma-Dependent
Cardiff, Robert D.; Trott, Josephine F.; Hovey, Russell C.; Hubbard, Neil E.; Engelberg, Jesse A.; Tepper, Clifford G.; Willis, Brandon J.; Khan, Imran H.; Ravindran, Resmi K.; Chan, Szeman R.; Schreiber, Robert D.; Borowsky, Alexander D.
2015-01-01
Female 129:Stat1-null mice (129S6/SvEvTac-Stat1tm1Rds homozygous) uniquely develop estrogen-receptor (ER)-positive mammary tumors. Herein we report that the mammary glands (MG) of these mice have altered growth and development with abnormal terminal end buds alongside defective branching morphogenesis and ductal elongation. We also find that the 129:Stat1-null mammary fat pad (MFP) fails to sustain the growth of 129S6/SvEv wild-type and Stat1-null epithelium. These abnormalities are partially reversed by elevated serum progesterone and prolactin whereas transplantation of wild-type bone marrow into 129:Stat1-null mice does not reverse the MG developmental defects. Medium conditioned by 129:Stat1-null epithelium-cleared MFP does not stimulate epithelial proliferation, whereas it is stimulated by medium conditioned by epithelium-cleared MFP from either wild-type or 129:Stat1-null females having elevated progesterone and prolactin. Microarrays and multiplexed cytokine assays reveal that the MG of 129:Stat1-null mice has lower levels of growth factors that have been implicated in normal MG growth and development. Transplanted 129:Stat1-null tumors and their isolated cells also grow slower in 129:Stat1-null MG compared to wild-type recipient MG. These studies demonstrate that growth of normal and neoplastic 129:Stat1-null epithelium is dependent on the hormonal milieu and on factors from the mammary stroma such as cytokines. While the individual or combined effects of these factors remains to be resolved, our data supports the role of STAT1 in maintaining a tumor-suppressive MG microenvironment. PMID:26075897
Tennese, Alysa A; Wevrick, Rachel
2011-03-01
Hypothalamic dysfunction may underlie endocrine abnormalities in Prader-Willi syndrome (PWS), a genetic disorder that features GH deficiency, obesity, and infertility. One of the genes typically inactivated in PWS, MAGEL2, is highly expressed in the hypothalamus. Mice deficient for Magel2 are obese with increased fat mass and decreased lean mass and have blunted circadian rhythm. Here, we demonstrate that Magel2-null mice have abnormalities of hypothalamic endocrine axes that recapitulate phenotypes in PWS. Magel2-null mice had elevated basal corticosterone levels, and although male Magel2-null mice had an intact corticosterone response to restraint and to insulin-induced hypoglycemia, female Magel2-null mice failed to respond to hypoglycemia with increased corticosterone. After insulin-induced hypoglycemia, Magel2-null mice of both sexes became more profoundly hypoglycemic, and female mice were slower to recover euglycemia, suggesting an impaired hypothalamic counterregulatory response. GH insufficiency can produce abnormal body composition, such as that seen in PWS and in Magel2-null mice. Male Magel2-null mice had Igf-I levels similar to control littermates. Female Magel2-null mice had low Igf-I levels and reduced GH release in response to stimulation with ghrelin. Female Magel2-null mice did respond to GHRH, suggesting that their GH deficiency has a hypothalamic rather than pituitary origin. Female Magel2-null mice also had higher serum adiponectin than expected, considering their increased fat mass, and thyroid (T(4)) levels were low. Together, these findings strongly suggest that loss of MAGEL2 contributes to endocrine dysfunction of hypothalamic origin in individuals with PWS.
Survival of glucose phosphate isomerase null somatic cells and germ cells in adult mouse chimaeras
Keighren, Margaret A.; Flockhart, Jean H.
2016-01-01
ABSTRACT The mouse Gpi1 gene encodes the glycolytic enzyme glucose phosphate isomerase. Homozygous Gpi1−/− null mouse embryos die but a previous study showed that some homozygous Gpi1−/− null cells survived when combined with wild-type cells in fetal chimaeras. One adult female Gpi1−/−↔Gpi1c/c chimaera with functional Gpi1−/− null oocytes was also identified in a preliminary study. The aims were to characterise the survival of Gpi1−/− null cells in adult Gpi1−/−↔Gpi1c/c chimaeras and determine if Gpi1−/− null germ cells are functional. Analysis of adult Gpi1−/−↔Gpi1c/c chimaeras with pigment and a reiterated transgenic lineage marker showed that low numbers of homozygous Gpi1−/− null cells could survive in many tissues of adult chimaeras, including oocytes. Breeding experiments confirmed that Gpi1−/− null oocytes in one female Gpi1−/−↔Gpi1c/c chimaera were functional and provided preliminary evidence that one male putative Gpi1−/−↔Gpi1c/c chimaera produced functional spermatozoa from homozygous Gpi1−/− null germ cells. Although the male chimaera was almost certainly Gpi1−/−↔Gpi1c/c, this part of the study is considered preliminary because only blood was typed for GPI. Gpi1−/− null germ cells should survive in a chimaeric testis if they are supported by wild-type Sertoli cells. It is also feasible that spermatozoa could bypass a block at GPI, but not blocks at some later steps in glycolysis, by using fructose, rather than glucose, as the substrate for glycolysis. Although chimaera analysis proved inefficient for studying the fate of Gpi1−/− null germ cells, it successfully identified functional Gpi1−/− null oocytes and revealed that some Gpi1−/− null cells could survive in many adult tissues. PMID:27103217
Kiss, Alexi; Koppel, Aaron C; Anders, Joanna; Cataisson, Christophe; Yuspa, Stuart H; Blumenberg, Miroslav; Efimova, Tatiana
2016-05-01
p38δ expression and/or activity are increased in human cutaneous malignancies, including invasive squamous cell carcinoma (SCC) and head and neck SCC, but the role of p38δ in cutaneous carcinogenesis has not been well-defined. We have reported that mice with germline loss of p38δ exhibited a reduced susceptibility to skin tumor development compared with wild-type mice in the two-stage 7,12-dimethylbenz(a)anthracene (DMBA)/12-O-tetradecanoylphorbol-13-acetate (TPA) chemical skin carcinogenesis model. Here, we report that p38δ gene ablation inhibited the growth of tumors generated from v-ras(Ha) -transformed keratinocytes in skin orthografts to nude mice, indicating that keratinocyte-intrinsic p38δ is required for Ras-induced tumorigenesis. Gene expression profiling of v-ras(Ha) -transformed p38δ-null keratinocytes revealed transcriptional changes associated with cellular responses linked to tumor suppression, such as reduced proliferation and increased differentiation, cell adhesion, and cell communications. Notably, a short-term DMBA/TPA challenge, modeling the initial stages of chemical skin carcinogenesis treatment, elicited an enhanced inflammation in p38δ-null skin compared with skin of wild-type mice, as assessed by measuring the expression of pro-inflammatory cytokines, including IL-1β, IL-6, IL-17, and TNFα. Additionally, p38δ-null skin and p38δ-null keratinocytes exhibited increased p38α activation and signaling in response to acute inflammatory challenges, suggesting a role for p38α in stimulating the elevated inflammatory response in p38δ-null skin during the initial phases of the DMBA/TPA treatment compared with similarly treated p38δ(+/+) skin. Altogether, our results indicate that p38δ signaling regulates skin carcinogenesis not only by keratinocyte cell-autonomous mechanisms, but also by influencing the interaction between between the epithelial compartment of the developing skin tumor and its stromal microenvironment. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernuzzi, Sebastiano; Nagar, Alessandro; Zenginoglu, Anil
2011-10-15
We compute and analyze the gravitational waveform emitted to future null infinity by a system of two black holes in the large-mass-ratio limit. We consider the transition from the quasiadiabatic inspiral to plunge, merger, and ringdown. The relative dynamics is driven by a leading order in the mass ratio, 5PN-resummed, effective-one-body (EOB), analytic-radiation reaction. To compute the waveforms, we solve the Regge-Wheeler-Zerilli equations in the time-domain on a spacelike foliation, which coincides with the standard Schwarzschild foliation in the region including the motion of the small black hole, and is globally hyperboloidal, allowing us to include future null infinity inmore » the computational domain by compactification. This method is called the hyperboloidal layer method, and is discussed here for the first time in a study of the gravitational radiation emitted by black hole binaries. We consider binaries characterized by five mass ratios, {nu}=10{sup -2,-3,-4,-5,-6}, that are primary targets of space-based or third-generation gravitational wave detectors. We show significative phase differences between finite-radius and null-infinity waveforms. We test, in our context, the reliability of the extrapolation procedure routinely applied to numerical relativity waveforms. We present an updated calculation of the final and maximum gravitational recoil imparted to the merger remnant by the gravitational wave emission, v{sub kick}{sup end}/(c{nu}{sup 2})=0.04474{+-}0.00007 and v{sub kick}{sup max}/(c{nu}{sup 2})=0.05248{+-}0.00008. As a self-consistency test of the method, we show an excellent fractional agreement (even during the plunge) between the 5PN EOB-resummed mechanical angular momentum loss and the gravitational wave angular momentum flux computed at null infinity. New results concerning the radiation emitted from unstable circular orbits are also presented. The high accuracy waveforms computed here could be considered for the construction of template banks or for calibrating analytic models such as the effective-one-body model.« less
Arbour, J H; López-Fernández, H
2014-11-01
Morphological, lineage and ecological diversity can vary substantially even among closely related lineages. Factors that influence morphological diversification, especially in functionally relevant traits, can help to explain the modern distribution of disparity across phylogenies and communities. Multivariate axes of feeding functional morphology from 75 species of Neotropical cichlid and a stepwise-AIC algorithm were used to estimate the adaptive landscape of functional morphospace in Cichlinae. Adaptive landscape complexity and convergence, as well as the functional diversity of Cichlinae, were compared with expectations under null evolutionary models. Neotropical cichlid feeding function varied primarily between traits associated with ram feeding vs. suction feeding/biting and secondarily with oral jaw muscle size and pharyngeal crushing capacity. The number of changes in selective regimes and the amount of convergence between lineages was higher than expected under a null model of evolution, but convergence was not higher than expected under a similarly complex adaptive landscape. Functional disparity was compatible with an adaptive landscape model, whereas the distribution of evolutionary change through morphospace corresponded with a process of evolution towards a single adaptive peak. The continentally distributed Neotropical cichlids have evolved relatively rapidly towards a number of adaptive peaks in functional trait space. Selection in Cichlinae functional morphospace is more complex than expected under null evolutionary models. The complexity of selective constraints in feeding morphology has likely been a significant contributor to the diversity of feeding ecology in this clade. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P={P(ω):ω∈Ω} such that the alternative hypothesis Ha={P(ω):ω∈Ωa} can be inferred upon the rejection of null hypothesis Ho={P(ω):ω∈Ω(o)} However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypothesis does not constitute a completed model collection P (i.e., H(o)∪H(a) is smaller than P). This not only imposes a strong non-validated assumption of the underlying true models, but also leads to different superiority claims depending on which test is used instead of scientific plausibility. Different ways to partition P fro testing treatment superiority often have different implications on sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specification of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison inclinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
Pfeiffer, R M; Riedl, R
2015-08-15
We assess the asymptotic bias of estimates of exposure effects conditional on covariates when summary scores of confounders, instead of the confounders themselves, are used to analyze observational data. First, we study regression models for cohort data that are adjusted for summary scores. Second, we derive the asymptotic bias for case-control studies when cases and controls are matched on a summary score, and then analyzed either using conditional logistic regression or by unconditional logistic regression adjusted for the summary score. Two scores, the propensity score (PS) and the disease risk score (DRS) are studied in detail. For cohort analysis, when regression models are adjusted for the PS, the estimated conditional treatment effect is unbiased only for linear models, or at the null for non-linear models. Adjustment of cohort data for DRS yields unbiased estimates only for linear regression; all other estimates of exposure effects are biased. Matching cases and controls on DRS and analyzing them using conditional logistic regression yields unbiased estimates of exposure effect, whereas adjusting for the DRS in unconditional logistic regression yields biased estimates, even under the null hypothesis of no association. Matching cases and controls on the PS yield unbiased estimates only under the null for both conditional and unconditional logistic regression, adjusted for the PS. We study the bias for various confounding scenarios and compare our asymptotic results with those from simulations with limited sample sizes. To create realistic correlations among multiple confounders, we also based simulations on a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.
Alignment of optical system components using an ADM beam through a null assembly
NASA Technical Reports Server (NTRS)
Hayden, Joseph E. (Inventor); Olczak, Eugene G. (Inventor)
2010-01-01
A system for testing an optical surface includes a rangefinder configured to emit a light beam and a null assembly located between the rangefinder and the optical surface. The null assembly is configured to receive and to reflect the emitted light beam toward the optical surface. The light beam reflected from the null assembly is further reflected back from the optical surface toward the null assembly as a return light beam. The rangefinder is configured to measure a distance to the optical surface using the return light beam.
The dev Operon Regulates the Timing of Sporulation during Myxococcus xanthus Development.
Rajagopalan, Ramya; Kroos, Lee
2017-05-15
Myxococcus xanthus undergoes multicellular development when starved. Thousands of rod-shaped cells coordinate their movements and aggregate into mounds in which cells differentiate into spores. Mutations in the dev operon impair development. The dev operon encompasses a clustered regularly interspaced short palindromic repeat-associated (CRISPR-Cas) system. Null mutations in devI , a small gene at the beginning of the dev operon, suppress the developmental defects caused by null mutations in the downstream devR and devS genes but failed to suppress defects caused by a small in-frame deletion in devT We provide evidence that the original mutant has a second-site mutation. We show that devT null mutants exhibit developmental defects indistinguishable from devR and devS null mutants, and a null mutation in devI suppresses the defects of a devT null mutation. The similarity of DevTRS proteins to components of the CRISPR-associated complex for antiviral defense (Cascade), together with our molecular characterization of dev mutants, support a model in which DevTRS form a Cascade-like subcomplex that negatively autoregulates dev transcript accumulation and prevents DevI overproduction that would strongly inhibit sporulation. Our results also suggest that DevI transiently inhibits sporulation when regulated normally. The mechanism of transient inhibition may involve MrpC, a key transcription factor, whose translation appears to be weakly inhibited by DevI. Finally, our characterization of a devI devS mutant indicates that very little exo transcript is required for sporulation, which is surprising since Exo proteins help form the polysaccharide spore coat. IMPORTANCE CRISPR-Cas systems typically function as adaptive immune systems in bacteria. The dev CRISPR-Cas system of M. xanthus has been proposed to prevent bacteriophage infection during development, but how dev controls sporulation has been elusive. Recent evidence supported a model in which DevR and DevS prevent overproduction of DevI, a predicted 40-residue inhibitor of sporulation. We provide genetic evidence that DevT functions together with DevR and DevS to prevent DevI overproduction. We also show that spores form about 6 h earlier in mutants lacking devI than in the wild type. Only a minority of natural isolates appear to have a functional dev promoter and devI , suggesting that a functional dev CRISPR-Cas system evolved recently in niches where delayed sporulation and/or protection from bacteriophage infection proved advantageous. Copyright © 2017 American Society for Microbiology.
The dev Operon Regulates the Timing of Sporulation during Myxococcus xanthus Development
Rajagopalan, Ramya
2017-01-01
ABSTRACT Myxococcus xanthus undergoes multicellular development when starved. Thousands of rod-shaped cells coordinate their movements and aggregate into mounds in which cells differentiate into spores. Mutations in the dev operon impair development. The dev operon encompasses a clustered regularly interspaced short palindromic repeat-associated (CRISPR-Cas) system. Null mutations in devI, a small gene at the beginning of the dev operon, suppress the developmental defects caused by null mutations in the downstream devR and devS genes but failed to suppress defects caused by a small in-frame deletion in devT. We provide evidence that the original mutant has a second-site mutation. We show that devT null mutants exhibit developmental defects indistinguishable from devR and devS null mutants, and a null mutation in devI suppresses the defects of a devT null mutation. The similarity of DevTRS proteins to components of the CRISPR-associated complex for antiviral defense (Cascade), together with our molecular characterization of dev mutants, support a model in which DevTRS form a Cascade-like subcomplex that negatively autoregulates dev transcript accumulation and prevents DevI overproduction that would strongly inhibit sporulation. Our results also suggest that DevI transiently inhibits sporulation when regulated normally. The mechanism of transient inhibition may involve MrpC, a key transcription factor, whose translation appears to be weakly inhibited by DevI. Finally, our characterization of a devI devS mutant indicates that very little exo transcript is required for sporulation, which is surprising since Exo proteins help form the polysaccharide spore coat. IMPORTANCE CRISPR-Cas systems typically function as adaptive immune systems in bacteria. The dev CRISPR-Cas system of M. xanthus has been proposed to prevent bacteriophage infection during development, but how dev controls sporulation has been elusive. Recent evidence supported a model in which DevR and DevS prevent overproduction of DevI, a predicted 40-residue inhibitor of sporulation. We provide genetic evidence that DevT functions together with DevR and DevS to prevent DevI overproduction. We also show that spores form about 6 h earlier in mutants lacking devI than in the wild type. Only a minority of natural isolates appear to have a functional dev promoter and devI, suggesting that a functional dev CRISPR-Cas system evolved recently in niches where delayed sporulation and/or protection from bacteriophage infection proved advantageous. PMID:28264995
Lee, Seong Min; Pike, J Wesley
2016-11-01
The vitamin D receptor (VDR) is a critical mediator of the biological actions of 1,25-dihydroxyvitamin D 3 (1,25(OH) 2 D 3 ). As a nuclear receptor, ligand activation of the VDR leads to the protein's binding to specific sites on the genome that results in the modulation of target gene expression. The VDR is also known to play a role in the hair cycle, an action that appears to be 1,25(OH) 2 D 3 -independent. Indeed, in the absence of the VDR as in hereditary 1,25-dihydroxyvitamin D resistant rickets (HVDRR) both skin defects and alopecia emerge. Recently, we generated a mouse model of HVDRR without alopecia wherein a mutant human VDR lacking 1,25(OH) 2 D 3 -binding activity was expressed in the absence of endogenous mouse VDR. While 1,25(OH) 2 D 3 failed to induce gene expression in these mice, resulting in an extensive skeletal phenotype, the receptor was capable of restoring normal hair cycling. We also noted a level of secondary hyperparathyroidism that was much higher than that seen in the VDR null mouse and was associated with an exaggerated bone phenotype as well. This suggested that the VDR might play a role in parathyroid hormone (PTH) regulation independent of 1,25(OH) 2 D 3 . To evaluate this hypothesis further, we contrasted PTH levels in the HVDRR mouse model with those seen in Cyp27b1 null mice where the VDR was present but the hormone was absent. The data revealed that PTH was indeed higher in Cyp27b1 null mice compared to VDR null mice. To evaluate the mechanism of action underlying such a hypothesis, we measured the expression levels of a number of VDR target genes in the duodena of wildtype mice and in transgenic mice expressing either normal or hormone-binding deficient mutant VDRs. We also compared expression levels of these genes between VDR null mice and Cyp27b1 null mice. In a subset of cases, the expression of VDR target genes was lower in mice containing the VDR as opposed to mice that did not. We suggest that the VDR may function as a selective suppressor/de-repressor of gene expression in the absence of 1,25(OH) 2 D 3 . Copyright © 2015 Elsevier Ltd. All rights reserved.
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment
Szucs, Denes; Ioannidis, John P. A.
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397
Silva, Adão; Gameiro, Atílio
2014-01-01
We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient to form groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928
A phylogenetic community approach for studying termite communities in a West African savannah.
Hausberger, Barbara; Korb, Judith
2015-10-01
Termites play fundamental roles in tropical ecosystems, and mound-building species in particular are crucial in enhancing species diversity, from plants to mammals. However, it is still unclear which factors govern the occurrence and assembly of termite communities. A phylogenetic community approach and null models of species assembly were used to examine structuring processes associated with termite community assembly in a pristine savannah. Overall, we did not find evidence for a strong influence of interspecific competition or environmental filtering in structuring these communities. However, the presence of a single species, the mound-building termite Macrotermes bellicosus, left a strong signal on structuring and led to clustered communities of more closely related species. Hence, this species changes the assembly rules for a whole community. Our results show the fundamental importance of a single insect species for community processes, suggesting that more attention to insect species is warranted when developing conservation strategies. © 2015 The Author(s).
Early Warning Signals of Ecological Transitions: Methods for Spatial Patterns
Brock, William A.; Carpenter, Stephen R.; Ellison, Aaron M.; Livina, Valerie N.; Seekell, David A.; Scheffer, Marten; van Nes, Egbert H.; Dakos, Vasilis
2014-01-01
A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lag behind their theoretical developments. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data. PMID:24658137
Timóteo, Sérgio; Correia, Marta; Rodríguez-Echeverría, Susana; Freitas, Helena; Heleno, Ruben
2018-01-10
Species interaction networks are traditionally explored as discrete entities with well-defined spatial borders, an oversimplification likely impairing their applicability. Using a multilayer network approach, explicitly accounting for inter-habitat connectivity, we investigate the spatial structure of seed-dispersal networks across the Gorongosa National Park, Mozambique. We show that the overall seed-dispersal network is composed by spatially explicit communities of dispersers spanning across habitats, functionally linking the landscape mosaic. Inter-habitat connectivity determines spatial structure, which cannot be accurately described with standard monolayer approaches either splitting or merging habitats. Multilayer modularity cannot be predicted by null models randomizing either interactions within each habitat or those linking habitats; however, as habitat connectivity increases, random processes become more important for overall structure. The importance of dispersers for the overall network structure is captured by multilayer versatility but not by standard metrics. Highly versatile species disperse many plant species across multiple habitats, being critical to landscape functional cohesion.
Abu-Amero, Khaled K; Al-Boudari, Olayan M; Mohamed, Gamal H; Dzimiri, Nduna
2006-01-01
Background The association of the deletion in GSTT1 and GSTM1 genes with coronary artery disease (CAD) among smokers is controversial. In addition, no such investigation has previously been conducted among Arabs. Methods We genotyped 1054 CAD patients and 762 controls for GSTT1 and GSTM1 deletion by multiplex polymerase chain reaction. Both CAD and controls were Saudi Arabs. Results In the control group (n = 762), 82.3% had the T wild M wildgenotype, 9% had the Twild M null, 2.4% had the Tnull M wild and 6.3% had the Tnull M null genotype. Among the CAD group (n = 1054), 29.5% had the Twild M wild genotype, 26.6% (p < .001) had the Twild M null, 8.3% (p < .001) had the Tnull M wild and 35.6% (p < .001) had the Tnull M null genotype, indicating a significant association of the Twild M null, Tnull M wild and Tnull M null genotypes with CAD. Univariate analysis also showed that smoking, age, hypercholesterolemia and hypertriglyceridemia, diabetes mellitus, family history of CAD, hypertension and obesity are all associated with CAD, whereas gender and myocardial infarction are not. Binary logistic regression for smoking and genotypes indicated that only M null and Tnullare interacting with smoking. However, further subgroup analysis stratifying the data by smoking status suggested that genotype-smoking interactions have no effect on the development of CAD. Conclusion GSTT1 and GSTM1 null-genotypes are risk factor for CAD independent of genotype-smoking interaction. PMID:16620396
Reyes, Nicholas L.; Banks, Glen B.; Tsang, Mark; Margineantu, Daciana; Gu, Haiwei; Djukovic, Danijel; Chan, Jacky; Torres, Michelle; Liggitt, H. Denny; Hirenallur-S, Dinesh K.; Hockenbery, David M.; Raftery, Daniel; Iritani, Brian M.
2015-01-01
Mammalian skeletal muscle is broadly characterized by the presence of two distinct categories of muscle fibers called type I “red” slow twitch and type II “white” fast twitch, which display marked differences in contraction strength, metabolic strategies, and susceptibility to fatigue. The relative representation of each fiber type can have major influences on susceptibility to obesity, diabetes, and muscular dystrophies. However, the molecular factors controlling fiber type specification remain incompletely defined. In this study, we describe the control of fiber type specification and susceptibility to metabolic disease by folliculin interacting protein-1 (Fnip1). Using Fnip1 null mice, we found that loss of Fnip1 increased the representation of type I fibers characterized by increased myoglobin, slow twitch markers [myosin heavy chain 7 (MyH7), succinate dehydrogenase, troponin I 1, troponin C1, troponin T1], capillary density, and mitochondria number. Cultured Fnip1-null muscle fibers had higher oxidative capacity, and isolated Fnip1-null skeletal muscles were more resistant to postcontraction fatigue relative to WT skeletal muscles. Biochemical analyses revealed increased activation of the metabolic sensor AMP kinase (AMPK), and increased expression of the AMPK-target and transcriptional coactivator PGC1α in Fnip1 null skeletal muscle. Genetic disruption of PGC1α rescued normal levels of type I fiber markers MyH7 and myoglobin in Fnip1-null mice. Remarkably, loss of Fnip1 profoundly mitigated muscle damage in a murine model of Duchenne muscular dystrophy. These results indicate that Fnip1 controls skeletal muscle fiber type specification and warrant further study to determine whether inhibition of Fnip1 has therapeutic potential in muscular dystrophy diseases. PMID:25548157
Generation of Esr1-Knockout Rats Using Zinc Finger Nuclease-Mediated Genome Editing
Dhakal, Pramod; Kubota, Kaiyu; Chakraborty, Damayanti; Lei, Tianhua; Larson, Melissa A.; Wolfe, Michael W.; Roby, Katherine F.; Vivian, Jay L.
2014-01-01
Estrogens play pivotal roles in development and function of many organ systems, including the reproductive system. We have generated estrogen receptor 1 (Esr1)-knockout rats using zinc finger nuclease (ZFN) genome targeting. mRNAs encoding ZFNs targeted to exon 3 of Esr1 were microinjected into single-cell rat embryos and transferred to pseudopregnant recipients. Of 17 live births, 5 had biallelic and 1 had monoallelic Esr1 mutations. A founder with monoallelic mutations was backcrossed to a wild-type rat. Offspring possessed only wild-type Esr1 alleles or wild-type alleles and Esr1 alleles containing either 482 bp (Δ482) or 223 bp (Δ223) deletions, indicating mosaicism in the founder. These heterozygous mutants were bred for colony expansion, generation of homozygous mutants, and phenotypic characterization. The Δ482 Esr1 allele yielded altered transcript processing, including the absence of exon 3, aberrant splicing of exon 2 and 4, and a frameshift that generated premature stop codons located immediately after the codon for Thr157. ESR1 protein was not detected in homozygous Δ482 mutant uteri. ESR1 disruption affected sexually dimorphic postnatal growth patterns and serum levels of gonadotropins and sex steroid hormones. Both male and female Esr1-null rats were infertile. Esr1-null males had small testes with distended and dysplastic seminiferous tubules, whereas Esr1-null females possessed large polycystic ovaries, thread-like uteri, and poorly developed mammary glands. In addition, uteri of Esr1-null rats did not effectively respond to 17β-estradiol treatment, further demonstrating that the Δ482 Esr1 mutation created a null allele. This rat model provides a new experimental tool for investigating the pathophysiology of estrogen action. PMID:24506075
Generation of Esr1-knockout rats using zinc finger nuclease-mediated genome editing.
Rumi, M A Karim; Dhakal, Pramod; Kubota, Kaiyu; Chakraborty, Damayanti; Lei, Tianhua; Larson, Melissa A; Wolfe, Michael W; Roby, Katherine F; Vivian, Jay L; Soares, Michael J
2014-05-01
Estrogens play pivotal roles in development and function of many organ systems, including the reproductive system. We have generated estrogen receptor 1 (Esr1)-knockout rats using zinc finger nuclease (ZFN) genome targeting. mRNAs encoding ZFNs targeted to exon 3 of Esr1 were microinjected into single-cell rat embryos and transferred to pseudopregnant recipients. Of 17 live births, 5 had biallelic and 1 had monoallelic Esr1 mutations. A founder with monoallelic mutations was backcrossed to a wild-type rat. Offspring possessed only wild-type Esr1 alleles or wild-type alleles and Esr1 alleles containing either 482 bp (Δ482) or 223 bp (Δ223) deletions, indicating mosaicism in the founder. These heterozygous mutants were bred for colony expansion, generation of homozygous mutants, and phenotypic characterization. The Δ482 Esr1 allele yielded altered transcript processing, including the absence of exon 3, aberrant splicing of exon 2 and 4, and a frameshift that generated premature stop codons located immediately after the codon for Thr157. ESR1 protein was not detected in homozygous Δ482 mutant uteri. ESR1 disruption affected sexually dimorphic postnatal growth patterns and serum levels of gonadotropins and sex steroid hormones. Both male and female Esr1-null rats were infertile. Esr1-null males had small testes with distended and dysplastic seminiferous tubules, whereas Esr1-null females possessed large polycystic ovaries, thread-like uteri, and poorly developed mammary glands. In addition, uteri of Esr1-null rats did not effectively respond to 17β-estradiol treatment, further demonstrating that the Δ482 Esr1 mutation created a null allele. This rat model provides a new experimental tool for investigating the pathophysiology of estrogen action.
Temporal and regional alterations in NMDA receptor expression in Mecp2-null mice
Blue, Mary E.; Kaufmann, Walter E.; Bressler, Joseph; Eyring, Charlotte; O’Driscoll, Cliona; Naidu, SakkuBai; Johnston, Michael V.
2014-01-01
Our previous postmortem study of girls with Rett Syndrome (RTT), a development disorder caused by MECP2 mutations, found increases in the density of NMDA receptors in the prefrontal cortex of 2–8 year-old girls, while girls older than 10 years had reductions in NMDA receptors compared to age matched controls (Blue et al., 1999b). Using [3H]-CGP to label NMDA type glutamate receptors in 2 and 7 week old wildtype (WT), Mecp2-null and Mecp2-heterozygous (HET) mice (Bird model), we found that frontal areas of the brain also exhibited a bimodal pattern in NMDA expression, with increased densities of NMDA receptors in Mecp2-null mice at 2 weeks of age, but decreased densities at 7 weeks of age. Visual cortex showed a similar pattern, while other cortical regions only exhibited changes in NMDA receptor densities at 2 weeks (retrosplenial granular) or 7 weeks (somatosensory). In thalamus of null mice, NMDA receptors were increased at 2 and 7 weeks. No significant differences in density were found between HET and WT mice at both ages. Western blots for NMDAR1 expression in frontal brain showed higher levels of expression in Mecp2-null mice at two weeks of age, but not at 1 or 7 weeks of age. Our mouse data support the notion that deficient MeCP2 function is the primary cause of the NMDA receptor changes we observed in RTT. Furthermore, the findings of regional and temporal differences in NMDA expression illustrate the importance of age and brain region in evaluating different genotypes of mice. PMID:21901842
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yu-Kun Jennifer; Yeager, Ronnie L.; Tanaka, Yuji
Oxidative stress has been proposed as an important promoter of the progression of fatty liver diseases. The current study investigates the potential functions of the Nrf2-Keap1 signaling pathway, an important hepatic oxidative stress sensor, in a rodent fatty liver model. Mice with no (Nrf2-null), normal (wild type, WT), and enhanced (Keap1 knockdown, K1-kd) expression of Nrf2 were fed a methionine- and choline-deficient (MCD) diet or a control diet for 5 days. Compared to WT mice, the MCD diet-caused hepatosteatosis was more severe in the Nrf2-null mice and less in the K1-kd mice. The Nrf2-null mice had lower hepatic glutathione andmore » exhibited more lipid peroxidation, whereas the K1-kd mice had the highest amount of glutathione in the liver and developed the least lipid peroxidation among the three genotypes fed the MCD diet. The Nrf2 signaling pathway was activated by the MCD diet, and the Nrf2-targeted cytoprotective genes Nqo1 and Gst{alpha}1/2 were induced in WT and even more in K1-kd mice. In addition, Nrf2-null mice on both control and MCD diets exhibited altered expression profiles of fatty acid metabolism genes, indicating Nrf2 may influence lipid metabolism in liver. For example, mRNA levels of long chain fatty acid translocase CD36 and the endocrine hormone Fgf21 were higher in livers of Nrf2-null mice and lower in the K1-kd mice than WT mice fed the MCD diet. Taken together, these observations indicate that Nrf2 could decelerate the onset of fatty livers caused by the MCD diet by increasing hepatic antioxidant and detoxification capabilities.« less
Coffee, R. Lane; Williamson, Ashley J.; Adkins, Christopher M.; Gray, Marisa C.; Page, Terry L.; Broadie, Kendal
2012-01-01
Fragile X syndrome (FXS), caused by loss of the Fragile X Mental Retardation 1 (FMR1) gene product (FMRP), is the most common heritable cause of intellectual disability and autism spectrum disorders. It has been long hypothesized that the phosphorylation of serine 500 (S500) in human FMRP controls its function as an RNA-binding translational repressor. To test this hypothesis in vivo, we employed neuronally targeted expression of three human FMR1 transgenes, including wild-type (hFMR1), dephosphomimetic (S500A-hFMR1) and phosphomimetic (S500D-hFMR1), in the Drosophila FXS disease model to investigate phosphorylation requirements. At the molecular level, dfmr1 null mutants exhibit elevated brain protein levels due to loss of translational repressor activity. This defect is rescued for an individual target protein and across the population of brain proteins by the phosphomimetic, whereas the dephosphomimetic phenocopies the null condition. At the cellular level, dfmr1 null synapse architecture exhibits increased area, branching and bouton number. The phosphomimetic fully rescues these synaptogenesis defects, whereas the dephosphomimetic provides no rescue. The presence of Futsch-positive (microtubule-associated protein 1B) supernumerary microtubule loops is elevated in dfmr1 null synapses. The human phosphomimetic restores normal Futsch loops, whereas the dephosphomimetic provides no activity. At the behavioral level, dfmr1 null mutants exhibit strongly impaired olfactory associative learning. The human phosphomimetic targeted only to the brain-learning center restores normal learning ability, whereas the dephosphomimetic provides absolutely no rescue. We conclude that human FMRP S500 phosphorylation is necessary for its in vivo function as a neuronal translational repressor and regulator of synaptic architecture, and for the manifestation of FMRP-dependent learning behavior. PMID:22080836
The Inhibitory G Protein α-Subunit, Gαz, Promotes Type 1 Diabetes-Like Pathophysiology in NOD Mice.
Fenske, Rachel J; Cadena, Mark T; Harenda, Quincy E; Wienkes, Haley N; Carbajal, Kathryn; Schaid, Michael D; Laundre, Erin; Brill, Allison L; Truchan, Nathan A; Brar, Harpreet; Wisinski, Jaclyn; Cai, Jinjin; Graham, Timothy E; Engin, Feyza; Kimple, Michelle E
2017-06-01
The α-subunit of the heterotrimeric Gz protein, Gαz, promotes β-cell death and inhibits β-cell replication when pancreatic islets are challenged by stressors. Thus, we hypothesized that loss of Gαz protein would preserve functional β-cell mass in the nonobese diabetic (NOD) model, protecting from overt diabetes. We saw that protection from diabetes was robust and durable up to 35 weeks of age in Gαz knockout mice. By 17 weeks of age, Gαz-null NOD mice had significantly higher diabetes-free survival than wild-type littermates. Islets from these mice had reduced markers of proinflammatory immune cell infiltration on both the histological and transcript levels and secreted more insulin in response to glucose. Further analyses of pancreas sections revealed significantly fewer terminal deoxynucleotidyltransferase-mediated dUTP nick end labeling (TUNEL)-positive β-cells in Gαz-null islets despite similar immune infiltration in control mice. Islets from Gαz-null mice also exhibited a higher percentage of Ki-67-positive β-cells, a measure of proliferation, even in the presence of immune infiltration. Finally, β-cell-specific Gαz-null mice phenocopy whole-body Gαz-null mice in their protection from developing hyperglycemia after streptozotocin administration, supporting a β-cell-centric role for Gαz in diabetes pathophysiology. We propose that Gαz plays a key role in β-cell signaling that becomes dysfunctional in the type 1 diabetes setting, accelerating the death of β-cells, which promotes further accumulation of immune cells in the pancreatic islets, and inhibiting a restorative proliferative response. Copyright © 2017 Endocrine Society.
Shen, Shuijie; Li, Lei; Ding, Xinxin; Zheng, Jiang
2014-01-01
Pulmonary toxicity of styrene is initiated by cytochromes P450-dependent metabolic activation. P450 2E1 and P450 2F2 are considered to be two main cytochrome P450 (CYP) enzymes responsible for styrene metabolism in mice. The objective of the current study was to determine the correlation between the formation of styrene metabolites (i.e. styrene oxide and 4-vinylphenol) and pulmonary toxicity of styrene, using Cyp2e1- and Cyp2f2-null mouse models. Dramatic decrease in the formation of styrene glycol and 4-vinylphenol was found in Cyp2f2-null mouse lung microsomes, relative to that in the wild-type mouse lung microsomes. However, no significant difference in the production of the styrene metabolites was observed between lung microsomes obtained from Cyp2e1-null and the wild-type mice. The knock–out and wild-type mice were treated with styrene (6.0 mmol/kg, ip), and cell counts and LDH activity in bronchoalveolar lavage fluids were monitored to evaluate the pulmonary toxicity induced by styrene. Cyp2e1-null mice displayed similar susceptibility to lung toxicity of styrene as the wild-type animals. However, Cyp2f2-null mice were resistant to styrene-induced pulmonary toxicity. In conclusion, both P450 2E1 and P450 2F2 are responsible for the metabolic activation of styrene. The latter enzyme plays an important role in styrene-induced pulmonary toxicity. Both styrene oxide and 4-vinylphenol are suggested to participate in the development of lung injury induced by styrene. PMID:24320693
Shen, Shuijie; Li, Lei; Ding, Xinxin; Zheng, Jiang
2014-01-21
Pulmonary toxicity of styrene is initiated by cytochromes P450-dependent metabolic activation. P450 2E1 and P450 2F2 are considered to be two main cytochrome P450 enzymes responsible for styrene metabolism in mice. The objective of the current study was to determine the correlation between the formation of styrene metabolites (i.e., styrene oxide and 4-vinylphenol) and pulmonary toxicity of styrene, using Cyp2e1- and Cyp2f2-null mouse models. A dramatic decrease in the formation of styrene glycol and 4-vinylphenol was found in Cyp2f2-null mouse lung microsomes relative to that in the wild-type mouse lung microsomes; however, no significant difference in the production of the styrene metabolites was observed between lung microsomes obtained from Cyp2e1-null and the wild-type mice. The knockout and wild-type mice were treated with styrene (6.0 mmol/kg, ip), and cell counts and LDH activity in bronchoalveolar lavage fluids were monitored to evaluate the pulmonary toxicity induced by styrene. Cyp2e1-null mice displayed a susceptibility to lung toxicity of styrene similar to that of the wild-type animals; however, Cyp2f2-null mice were resistant to styrene-induced pulmonary toxicity. In conclusion, both P450 2E1 and P450 2F2 are responsible for the metabolic activation of styrene. The latter enzyme plays an important role in styrene-induced pulmonary toxicity. Both styrene oxide and 4-vinylphenol are suggested to participate in the development of lung injury induced by styrene.
Zhang, Yu-Kun Jennifer; Yeager, Ronnie L; Tanaka, Yuji; Klaassen, Curtis D
2010-06-15
Oxidative stress has been proposed as an important promoter of the progression of fatty liver diseases. The current study investigates the potential functions of the Nrf2-Keap1 signaling pathway, an important hepatic oxidative stress sensor, in a rodent fatty liver model. Mice with no (Nrf2-null), normal (wild type, WT), and enhanced (Keap1 knockdown, K1-kd) expression of Nrf2 were fed a methionine- and choline-deficient (MCD) diet or a control diet for 5 days. Compared to WT mice, the MCD diet-caused hepatosteatosis was more severe in the Nrf2-null mice and less in the K1-kd mice. The Nrf2-null mice had lower hepatic glutathione and exhibited more lipid peroxidation, whereas the K1-kd mice had the highest amount of glutathione in the liver and developed the least lipid peroxidation among the three genotypes fed the MCD diet. The Nrf2 signaling pathway was activated by the MCD diet, and the Nrf2-targeted cytoprotective genes Nqo1 and Gstalpha1/2 were induced in WT and even more in K1-kd mice. In addition, Nrf2-null mice on both control and MCD diets exhibited altered expression profiles of fatty acid metabolism genes, indicating Nrf2 may influence lipid metabolism in liver. For example, mRNA levels of long chain fatty acid translocase CD36 and the endocrine hormone Fgf21 were higher in livers of Nrf2-null mice and lower in the K1-kd mice than WT mice fed the MCD diet. Taken together, these observations indicate that Nrf2 could decelerate the onset of fatty livers caused by the MCD diet by increasing hepatic antioxidant and detoxification capabilities. Copyright 2010. Published by Elsevier Inc.
Ghosh, Soma; Sur, Surojit; Yerram, Sashidhar R.; Rago, Carlo; Bhunia, Anil K.; Hossain, M. Zulfiquer; Paun, Bogdan C.; Ren, Yunzhao R.; Iacobuzio-Donahue, Christine A.; Azad, Nilofer A.; Kern, Scott E.
2014-01-01
Large-magnitude numerical distinctions (>10-fold) among drug responses of genetically contrasting cancers were crucial for guiding the development of some targeted therapies. Similar strategies brought epidemiological clues and prevention goals for genetic diseases. Such numerical guides, however, were incomplete or low magnitude for Fanconi anemia pathway (FANC) gene mutations relevant to cancer in FANC-mutation carriers (heterozygotes). We generated a four-gene FANC-null cancer panel, including the engineering of new PALB2/FANCN-null cancer cells by homologous recombination. A characteristic matching of FANCC-null, FANCG-null, BRCA2/FANCD1-null, and PALB2/FANCN-null phenotypes was confirmed by uniform tumor regression on single-dose cross-linker therapy in mice and by shared chemical hypersensitivities to various inter-strand cross-linking agents and γ-radiation in vitro. Some compounds, however, had contrasting magnitudes of sensitivity; a strikingly high (19- to 22-fold) hypersensitivity was seen among PALB2-null and BRCA2-null cells for the ethanol metabolite, acetaldehyde, associated with widespread chromosomal breakage at a concentration not producing breaks in parental cells. Because FANC-defective cancer cells can share or differ in their chemical sensitivities, patterns of selective hypersensitivity hold implications for the evolutionary understanding of this pathway. Clinical decisions for cancer-relevant prevention and management of FANC-mutation carriers could be modified by expanded studies of high-magnitude sensitivities. PMID:24200853
Null conformal Killing-Yano tensors and Birkhoff theorem
NASA Astrophysics Data System (ADS)
Ferrando, Joan Josep; Sáez, Juan Antonio
2016-04-01
We study the space-times admitting a null conformal Killing-Yano tensor whose divergence defines a Killing vector. We analyze the similarities and differences with the recently studied non null case (Ferrando and Sáez in Gen Relativ Gravit 47:1911, 2015). The results by Barnes concerning the Birkhoff theorem for the case of null orbits are analyzed and generalized.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environment variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma ( PPARG ) gene associated with diabetes.
Default Bayes Factors for Model Selection in Regression
ERIC Educational Resources Information Center
Rouder, Jeffrey N.; Morey, Richard D.
2012-01-01
In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…
ERIC Educational Resources Information Center
Modupe, Ale Veronica; Babafemi, Kolawole Emmanuel
2015-01-01
The study examined the various means of solving contradictions of predictive studies of University Matriculation Examination in Nigeria. The study used a sample size of 35 studies on predictive validity of University Matriculation Examination in Nigeria, which was purposively selected to have met the criteria for meta-analysis. Two null hypotheses…
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling...input-output contract satisfaction and absence of null pointer dereferences. 15. SUBJECT TERMS Formal Methods, Software Verification , Model-Based...Domain specific languages (DSLs) drive both implementation and formal verification
Why Might Relative Fit Indices Differ between Estimators?
ERIC Educational Resources Information Center
Weng, Li-Jen; Cheng, Chung-Ping
1997-01-01
Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…
Memon, Mushtaq A.; Anway, Matthew D.; Covert, Trevor R.; Uzumcu, Mehmet; Skinner, Michael K.
2008-01-01
The role transforming growth factor beta (TGFb) isoforms TGFb1, TGFb2 and TGFb3 have in the regulation of embryonic gonadal development was investigated with the use of null-mutant (i.e. knockout) mice for each of the TGFb isoforms. Late embryonic gonadal development was investigated because homozygote TGFb null-mutant mice generally die around birth, with some embryonic loss as well. In the testis, the TGFb1 null-mutant mice had a decrease in the number of germ cells at birth, postnatal day 0 (P0). In the testis, the TGFb2 null-mutant mice had a decrease in the number of seminiferous cords at embryonic day 15 (E15). In the ovary, the TGFb2 null-mutant mice had an increase in the number of germ cells at P0. TGFb isoforms appear to have a role in gonadal development, but interactions between the isoforms is speculated to compensate in the different TGFb isoform null-mutant mice. PMID:18790002
An alternative approach to confidence interval estimation for the win ratio statistic.
Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann
2015-03-01
Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in all pairwise matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of treatment effect size and the type of the joint distribution of the outcomes. © 2014, The International Biometric Society.
Choi, Yun-Sik; Horning, Paul; Aten, Sydney; Karelina, Kate; Alzate-Correa, Diego; Arthur, J. Simon C.; Hoyt, Kari R.; Obrietan, Karl
2017-01-01
Mitogen-activated protein kinase (MAPK) signaling has been implicated in a wide range of neuronal processes, including development, plasticity, and viability. One of the principal downstream targets of both the extracellular signal-regulated kinase/MAPK pathway and the p38 MAPK pathway is Mitogen- and Stress-activated protein Kinase 1 (MSK1). Here, we sought to understand the role that MSK1 plays in neuroprotection against excitotoxic stimulation in the hippocampus. To this end, we utilized immunohistochemical labeling, a MSK1 null mouse line, cell viability assays, and array-based profiling approaches. Initially, we show that MSK1 is broadly expressed within the major neuronal cell layers of the hippocampus and that status epilepticus drives acute induction of MSK1 activation. In response to the status epilepticus paradigm, MSK1 KO mice exhibited a striking increase in vulnerability to pilocarpine-evoked cell death within the CA1 and CA3 cell layers. Further, cultured MSK1 null neurons exhibited a heighted level of N-methyl-D-aspartate-evoked excitotoxicity relative to wild-type neurons, as assessed using the lactate dehydrogenase assay. Given these findings, we examined the hippocampal transcriptional profile of MSK1 null mice. Affymetrix array profiling revealed that MSK1 deletion led to the significant (>1.25-fold) downregulation of 130 genes and an upregulation of 145 genes. Notably, functional analysis indicated that a subset of these genes contribute to neuroprotective signaling networks. Together, these data provide important new insights into the mechanism by which the MAPK/MSK1 signaling cassette confers neuroprotection against excitotoxic insults. Approaches designed to upregulate or mimic the functional effects of MSK1 may prove beneficial against an array of degenerative processes resulting from excitotoxic insults. PMID:28870089
Mickiewicz, Beata; Shin, Sung Y.; Pozzi, Ambra; Vogel, Hans J.; Clark, Andrea L.
2016-01-01
The risk of developing post traumatic osteoarthritis (PTOA) following joint injury is high. Furthering our understanding of the molecular mechanisms underlying PTOA and/or identifying novel biomarkers for early detection may help improve treatment outcomes. Increased expression of integrin α1β1 and inhibition of epidermal growth factor receptor (EGFR) signaling protect the knee from spontaneous OA, however the impact of the integrin α1β1/EGFR axis on PTOA is currently unknown. We sought to determine metabolic changes in serum samples collected from wild type and integrin α1-null mice that underwent surgery to destabilize the medial meniscus and were treated with the EGFR inhibitor erlotinib. Following 1H nuclear magnetic resonance spectroscopy we generated multivariate statistical models that distinguished between the metabolic profiles of erlotinib- versus vehicle-treated mice, and the integrin α1-null versus wild type mouse genotype. Our results show the sex dependent effects of erlotinib treatment and highlight glutamine as a metabolite that counteracts this treatment. Furthermore, we identified a set of metabolites associated with increased reactive oxygen species production, susceptibility to OA and regulation of TRP channels in α1-null mice. Our study indicates that systemic pharmacological and genetic factors have a greater effect on serum metabolic profiles than site specific factors such as surgery. PMID:26784366
Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S
2017-09-01
With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing of large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure into the models to lower the occurrences of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of the possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an F ST outlier method (FDIST approach in arlequin) and compare their results. samβada - an open source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
Stram, Daniel O; Leigh Pearce, Celeste; Bretsky, Phillip; Freedman, Matthew; Hirschhorn, Joel N; Altshuler, David; Kolonel, Laurence N; Henderson, Brian E; Thomas, Duncan C
2003-01-01
The US National Cancer Institute has recently sponsored the formation of a Cohort Consortium (http://2002.cancer.gov/scpgenes.htm) to facilitate the pooling of data on very large numbers of people, concerning the effects of genes and environment on cancer incidence. One likely goal of these efforts will be generate a large population-based case-control series for which a number of candidate genes will be investigated using SNP haplotype as well as genotype analysis. The goal of this paper is to outline the issues involved in choosing a method of estimating haplotype-specific risk estimates for such data that is technically appropriate and yet attractive to epidemiologists who are already comfortable with odds ratios and logistic regression. Our interest is to develop and evaluate extensions of methods, based on haplotype imputation, that have been recently described (Schaid et al., Am J Hum Genet, 2002, and Zaykin et al., Hum Hered, 2002) as providing score tests of the null hypothesis of no effect of SNP haplotypes upon risk, which may be used for more complex tasks, such as providing confidence intervals, and tests of equivalence of haplotype-specific risks in two or more separate populations. In order to do so we (1) develop a cohort approach towards odds ratio analysis by expanding the E-M algorithm to provide maximum likelihood estimates of haplotype-specific odds ratios as well as genotype frequencies; (2) show how to correct the cohort approach, to give essentially unbiased estimates for population-based or nested case-control studies by incorporating the probability of selection as a case or control into the likelihood, based on a simplified model of case and control selection, and (3) finally, in an example data set (CYP17 and breast cancer, from the Multiethnic Cohort Study) we compare likelihood-based confidence interval estimates from the two methods with each other, and with the use of the single-imputation approach of Zaykin et al. applied under both null and alternative hypotheses. We conclude that so long as haplotypes are well predicted by SNP genotypes (we use the Rh2 criteria of Stram et al. [1]) the differences between the three methods are very small and in particular that the single imputation method may be expected to work extremely well. Copyright 2003 S. Karger AG, Basel
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is not much. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
Warner, Lisa M; Wolff, Julia K; Ziegelmann, Jochen P; Schwarzer, Ralf; Wurm, Susanne
2016-10-01
A randomised controlled trial (RCT) was conducted to evaluate a three-hour face-to-face physical activity (PA) intervention in community-dwelling older German adults with four groups: The intervention group (IG) received behaviour change techniques (BCTs) based on the health action process approach plus a views-on-ageing component to increase PA. The second intervention group 'planning' (IGpl) contained the same BCTs, only substituted the views-on-ageing component against an additional planning task. An active control group received the same BCTs, however, targeting volunteering instead of PA. A passive control group (PCG) received no intervention. The RCT comprised 5 time-points over 14 months in N = 310 participants aged 64+. Self-reported as well as accelerometer-assessed PA. Neither PA measure increased in the IG as compared to the other groups at any point in time. Bayes analyses supported these null-effects. A possible explanation for this null-finding in line with a recent meta-analysis is that some self-regulatory BCTs may be ineffective or even negatively associated with PA in interventions for older adults as they are assumed to be less acceptable for older adults. This interpretation was supported by observed reluctance to participate in self-regulatory BCTs in the current study.
OBSERVATION OF MAGNETIC RECONNECTION AT A 3D NULL POINT ASSOCIATED WITH A SOLAR ERUPTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, J. Q.; Yang, K.; Cheng, X.
Magnetic null has long been recognized as a special structure serving as a preferential site for magnetic reconnection (MR). However, the direct observational study of MR at null-points is largely lacking. Here, we show the observations of MR around a magnetic null associated with an eruption that resulted in an M1.7 flare and a coronal mass ejection. The Geostationary Operational Environmental Satellites X-ray profile of the flare exhibited two peaks at ∼02:23 UT and ∼02:40 UT on 2012 November 8, respectively. Based on the imaging observations, we find that the first and also primary X-ray peak was originated from MRmore » in the current sheet (CS) underneath the erupting magnetic flux rope (MFR). On the other hand, the second and also weaker X-ray peak was caused by MR around a null point located above the pre-eruption MFR. The interaction of the null point and the erupting MFR can be described as a two-step process. During the first step, the erupting and fast expanding MFR passed through the null point, resulting in a significant displacement of the magnetic field surrounding the null. During the second step, the displaced magnetic field started to move back, resulting in a converging inflow and subsequently the MR around the null. The null-point reconnection is a different process from the current sheet reconnection in this flare; the latter is the cause of the main peak of the flare, while the former is the cause of the secondary peak of the flare and the conspicuous high-lying cusp structure.« less
Planck 2015 results. III. LFI systematic uncertainties
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Burigana, C.; Butler, R. C.; Calabrese, E.; Catalano, A.; Christensen, P. R.; Colombo, L. P. L.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Frailis, M.; Franceschet, C.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Meinhold, P. R.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Nati, F.; Natoli, P.; Noviello, F.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Pearson, T. J.; Perdereau, O.; Pettorino, V.; Piacentini, F.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Stolyarov, V.; Stompor, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.
2016-09-01
We present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (I) simulations based on measured data and physical models of the known systematic effects; and (II) analysis of difference maps containing the same sky signal ("null-maps"). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrum by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10-20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70GHz EE spectrum using the 30 and 353GHz channels as foreground templates. At 30GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to an imperfect knowledge of the beam far sidelobes.
Seasonal changes in the assembly mechanisms structuring tropical fish communities.
Fitzgerald, Daniel B; Winemiller, Kirk O; Sabaj Pérez, Mark H; Sousa, Leandro M
2017-01-01
Despite growing interest in trait-based approaches to community assembly, little attention has been given to seasonal variation in trait distribution patterns. Mobile animals can rapidly mediate influences of environmental factors and species interactions through dispersal, suggesting that the relative importance of different assembly mechanisms can vary over short time scales. This study analyzes seasonal changes in functional trait distributions of tropical fishes in the Xingu River, a major tributary of the Amazon with large predictable temporal variation in hydrologic conditions and species density. Comparison of observed functional diversity revealed that species within wet-season assemblages were more functionally similar than those in dry-season assemblages. Further, species within wet-season assemblages were more similar than random expectations based on null model predictions. Higher functional richness within dry season communities is consistent with increased niche complementarity during the period when fish densities are highest and biotic interactions should be stronger; however, null model tests suggest that stochastic factors or a combination of assembly mechanisms influence dry-season assemblages. These results demonstrate that the relative influence of community assembly mechanisms can vary seasonally in response to changing abiotic conditions, and suggest that studies attempting to infer a single dominant mechanism from functional patterns may overlook important aspects of the assembly process. During the prolonged flood pulse of the wet season, expanded habitat and lower densities of aquatic organisms likely reduce the influence of competition and predation. This temporal shift in the influence of different assembly mechanisms, rather than any single mechanism, may play a large role in maintaining the structure and diversity of tropical rivers and perhaps other dynamic and biodiverse systems. © 2016 by the Ecological Society of America.
Planck 2015 results: III. LFI systematic uncertainties
Ade, P. A. R.; Aumont, J.; Baccigalupi, C.; ...
2016-09-20
In this paper, we present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (i) simulations based on measured data and physical models of the known systematic effects; and (ii) analysis of difference maps containing the same sky signal (“null-maps”). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrummore » by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10–20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70GHz EE spectrum using the 30 and 353GHz channels as foreground templates. At 30GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Finally, some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to an imperfect knowledge of the beam far sidelobes.« less
Planck 2015 results: III. LFI systematic uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ade, P. A. R.; Aumont, J.; Baccigalupi, C.
In this paper, we present the current accounting of systematic effect uncertainties for the Low Frequency Instrument (LFI) that are relevant to the 2015 release of the Planck cosmological results, showing the robustness and consistency of our data set, especially for polarization analysis. We use two complementary approaches: (i) simulations based on measured data and physical models of the known systematic effects; and (ii) analysis of difference maps containing the same sky signal (“null-maps”). The LFI temperature data are limited by instrumental noise. At large angular scales the systematic effects are below the cosmic microwave background (CMB) temperature power spectrummore » by several orders of magnitude. In polarization the systematic uncertainties are dominated by calibration uncertainties and compete with the CMB E-modes in the multipole range 10–20. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70GHz EE spectrum using the 30 and 353GHz channels as foreground templates. At 30GHz the systematic effects are smaller than the Galactic foreground at all scales in temperature and polarization, which allows us to consider this channel as a reliable template of synchrotron emission. We assess the residual uncertainties due to LFI effects on CMB maps and power spectra after component separation and show that these effects are smaller than the CMB amplitude at all scales. We also assess the impact on non-Gaussianity studies and find it to be negligible. Finally, some residuals still appear in null maps from particular sky survey pairs, particularly at 30 GHz, suggesting possible straylight contamination due to an imperfect knowledge of the beam far sidelobes.« less
Dulau, Violaine; Estrade, Vanessa; Fayan, Jacques
2017-01-01
Photo-identification surveys of Indo-Pacific bottlenose dolphins were conducted from 2009 to 2014 off Reunion Island (55°E33'/21°S07'), in the Indian Ocean. Robust Design models were applied to produce the most reliable estimate of population abundance and survival rate, while accounting for temporary emigration from the survey area (west coast). The sampling scheme consisted of a five-month (June-October) sampling period in each year of the study. The overall population size at Reunion was estimated to be 72 individuals (SE = 6.17, 95%CI = 61-85), based on a random temporary emigration (γ") of 0.096 and a proportion of 0.70 (SE = 0.03) distinct individuals. The annual survival rate was 0.93 (±0.018 SE, 95%CI = 0.886-0.958) and was constant over time and between sexes. Models considering gender groups indicated different movement patterns between males and females. Males showed null or quasi-null temporary emigration (γ" = γ' < 0.01), while females showed a random temporary emigration (γ") of 0.10, suggesting that a small proportion of females was outside the survey area during each primary sampling period. Sex-specific temporary migration patterns were consistent with movement and residency patterns observed in other areas. The Robust Design approach provided an appropriate sampling scheme for deriving island-associated population parameters, while allowing to restrict survey effort both spatially (i.e. west coast only) and temporally (five months per year). Although abundance and survival were stable over the six years, the small population size of fewer than 100 individuals suggested that this population is highly vulnerable. Priority should be given to reducing any potential impact of human activity on the population and its habitat.
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
Kager, Leo; Bruce, Lesley J; Zeitlhofer, Petra; Flatt, Joanna F; Maia, Tabita M; Ribeiro, M Leticia; Fahrner, Bernhard; Fritsch, Gerhard; Boztug, Kaan; Haas, Oskar A
2017-03-01
We describe the second patient with anionic exchanger 1/band 3 null phenotype (band 3 null VIENNA ), which was caused by a novel nonsense mutation c.1430C>A (p.Ser477X) in exon 12 of SLC4A1. We also update on the previous band 3 null COIMBRA patient, thereby elucidating the physiological implications of total loss of AE1/band 3. Besides transfusion-dependent severe hemolytic anemia and complete distal renal tubular acidosis, dyserythropoiesis was identified in the band 3 null VIENNA patient, suggesting a role for band 3 in erythropoiesis. Moreover, we also, for the first time, report that long-term survival is possible in band 3 null patients. © 2016 Wiley Periodicals, Inc.
Ghosh, Soma; Sur, Surojit; Yerram, Sashidhar R; Rago, Carlo; Bhunia, Anil K; Hossain, M Zulfiquer; Paun, Bogdan C; Ren, Yunzhao R; Iacobuzio-Donahue, Christine A; Azad, Nilofer A; Kern, Scott E
2014-01-01
Large-magnitude numerical distinctions (>10-fold) among drug responses of genetically contrasting cancers were crucial for guiding the development of some targeted therapies. Similar strategies brought epidemiological clues and prevention goals for genetic diseases. Such numerical guides, however, were incomplete or low magnitude for Fanconi anemia pathway (FANC) gene mutations relevant to cancer in FANC-mutation carriers (heterozygotes). We generated a four-gene FANC-null cancer panel, including the engineering of new PALB2/FANCN-null cancer cells by homologous recombination. A characteristic matching of FANCC-null, FANCG-null, BRCA2/FANCD1-null, and PALB2/FANCN-null phenotypes was confirmed by uniform tumor regression on single-dose cross-linker therapy in mice and by shared chemical hypersensitivities to various inter-strand cross-linking agents and γ-radiation in vitro. Some compounds, however, had contrasting magnitudes of sensitivity; a strikingly high (19- to 22-fold) hypersensitivity was seen among PALB2-null and BRCA2-null cells for the ethanol metabolite, acetaldehyde, associated with widespread chromosomal breakage at a concentration not producing breaks in parental cells. Because FANC-defective cancer cells can share or differ in their chemical sensitivities, patterns of selective hypersensitivity hold implications for the evolutionary understanding of this pathway. Clinical decisions for cancer-relevant prevention and management of FANC-mutation carriers could be modified by expanded studies of high-magnitude sensitivities. Copyright © 2014 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
On the Model-Based Bootstrap with Missing Data: Obtaining a "P"-Value for a Test of Exact Fit
ERIC Educational Resources Information Center
Savalei, Victoria; Yuan, Ke-Hai
2009-01-01
Evaluating the fit of a structural equation model via bootstrap requires a transformation of the data so that the null hypothesis holds exactly in the sample. For complete data, such a transformation was proposed by Beran and Srivastava (1985) for general covariance structure models and applied to structural equation modeling by Bollen and Stine…
NASA Astrophysics Data System (ADS)
Roy, S. R.; Banerjee, S. K.
1992-11-01
A homogeneous Bianchi type VIh cosmological model filled with perfect fluid, null electromagnetic field and streaming neutrinos is obtained for which the free gravitational field is of the electric type. The barotropic equation of statep = (γ-1)ɛ is imposed in the particular case of Bianchi VI0 string models. Various physical and kinematical properties of the models are discussed.
A comparison of random draw and locally neutral models for the avifauna of an English woodland.
Dolman, Andrew M; Blackburn, Tim M
2004-06-03
Explanations for patterns observed in the structure of local assemblages are frequently sought with reference to interactions between species, and between species and their local environment. However, analyses of null models, where non-interactive local communities are assembled from regional species pools, have demonstrated that much of the structure of local assemblages remains in simulated assemblages where local interactions have been excluded. Here we compare the ability of two null models to reproduce the breeding bird community of Eastern Wood, a 16-hectare woodland in England, UK. A random draw model, in which there is complete annual replacement of the community by immigrants from the regional pool, is compared to a locally neutral community model, in which there are two additional parameters describing the proportion of the community replaced annually (per capita death rate) and the proportion of individuals recruited locally rather than as immigrants from the regional pool. Both the random draw and locally neutral model are capable of reproducing with significant accuracy several features of the observed structure of the annual Eastern Wood breeding bird community, including species relative abundances, species richness and species composition. The two additional parameters present in the neutral model result in a qualitatively more realistic representation of the Eastern Wood breeding bird community, particularly of its dynamics through time. The fact that these parameters can be varied, allows for a close quantitative fit between model and observed communities to be achieved, particularly with respect to annual species richness and species accumulation through time. The presence of additional free parameters does not detract from the qualitative improvement in the model and the neutral model remains a model of local community structure that is null with respect to species differences at the local scale. The ability of this locally neutral model to describe a larger number of woodland bird communities with either little variation in its parameters or with variation explained by features local to the woods themselves (such as the area and isolation of a wood) will be a key subsequent test of its relevance.
Diffusion in Colocation Contact Networks: The Impact of Nodal Spatiotemporal Dynamics.
Thomas, Bryce; Jurdak, Raja; Zhao, Kun; Atkinson, Ian
2016-01-01
Temporal contact networks are studied to understand dynamic spreading phenomena such as communicable diseases or information dissemination. To establish how spatiotemporal dynamics of nodes impact spreading potential in colocation contact networks, we propose "inducement-shuffling" null models which break one or more correlations between times, locations and nodes. By reconfiguring the time and/or location of each node's presence in the network, these models induce alternative sets of colocation events giving rise to contact networks with varying spreading potential. This enables second-order causal reasoning about how correlations in nodes' spatiotemporal preferences not only lead to a given contact network but ultimately influence the network's spreading potential. We find the correlation between nodes and times to be the greatest impediment to spreading, while the correlation between times and locations slightly catalyzes spreading. Under each of the presented null models we measure both the number of contacts and infection prevalence as a function of time, with the surprising finding that the two have no direct causality.
Approaches to Numerical Relativity
NASA Astrophysics Data System (ADS)
d'Inverno, Ray
2005-07-01
Introduction Ray d'Inverno; Preface C. J. S. Clarke; Part I. Theoretical Approaches: 1. Numerical relativity on a transputer array Ray d'Inverno; 2. Some aspects of the characteristic initial value problem in numerical relativity Nigel Bishop; 3. The characteristic initial value problem in general relativity J. M. Stewart; 4. Algebraic approachs to the characteristic initial value problem in general relativity Jõrg Frauendiener; 5. On hyperboidal hypersurfaces Helmut Friedrich; 6. The initial value problem on null cones J. A. Vickers; 7. Introduction to dual-null dynamics S. A. Hayward; 8. On colliding plane wave space-times J. B. Griffiths; 9. Boundary conditions for the momentum constraint Niall O Murchadha; 10. On the choice of matter model in general relativity A. D. Rendall; 11. A mathematical approach to numerical relativity J. W. Barrett; 12. Making sense of the effects of rotation in general relativity J. C. Miller; 13. Stability of charged boson stars and catastrophe theory Franz E. Schunck, Fjodor V. Kusmartsev and Eckehard W. Mielke; Part II. Practical Approaches: 14. Numerical asymptotics R. Gómez and J. Winicour; 15. Instabilities in rapidly rotating polytropes Scott C. Smith and Joan M. Centrella; 16. Gravitational radiation from coalescing binary neutron stars Ken-Ichi Oohara and Takashi Nakamura; 17. 'Critical' behaviour in massless scalar field collapse M. W. Choptuik; 18. Goudunov-type methods applied to general relativistic gravitational collapse José Ma. Ibánez, José Ma. Martí, Juan A. Miralles and J. V. Romero; 19. Astrophysical sources of gravitational waves and neutrinos Silvano Bonazzola, Eric Gourgoulhon, Pawel Haensel and Jean-Alain Marck; 20. Gravitational radiation from triaxial core collapse Jean-Alain Marck and Silvano Bonazzola; 21. A vacuum fully relativistic 3D numerical code C. Bona and J. Massó; 22. Solution of elliptic equations in numerical relativity using multiquadrics M. R. Dubal, S. R. Oliveira and R. A. Matzner; 23. Self-gravitating thin disks around rotating black holes A. Lanza; 24. An ADI and causal reconnection Gabrielle D. Allen and Bernard F. Schutz; 25. Time-symmetric ADI and causal reconnection Miguel Alcubierre and Bernard F. Schutz; 26. The numerical study of topological defects E. P. S. Shellard; 27. Computations of bubble growth during the cosmological quark-hadron transition J. C. Miller and O. Pantano; 28. Initial data of axisymmetric gravitational waves with a cosmological constant Ken-Ichi Nakao, Kei-Ichi Maeda, Takashi Nakamura and Ken-Ichi Oohara.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olshevsky, Vyacheslav; Lapenta, Giovanni; Divin, Andrey
We use kinetic particle-in-cell and MHD simulations supported by an observational data set to investigate magnetic reconnection in clusters of null points in space plasma. The magnetic configuration under investigation is driven by fast adiabatic flux rope compression that dissipates almost half of the initial magnetic field energy. In this phase powerful currents are excited producing secondary instabilities, and the system is brought into a state of “intermittent turbulence” within a few ion gyro-periods. Reconnection events are distributed all over the simulation domain and energy dissipation is rather volume-filling. Numerous spiral null points interconnected via their spines form null linesmore » embedded into magnetic flux ropes; null point pairs demonstrate the signatures of torsional spine reconnection. However, energy dissipation mainly happens in the shear layers formed by adjacent flux ropes with oppositely directed currents. In these regions radial null pairs are spontaneously emerging and vanishing, associated with electron streams and small-scale current sheets. The number of spiral nulls in the simulation outweighs the number of radial nulls by a factor of 5–10, in accordance with Cluster observations in the Earth's magnetosheath. Twisted magnetic fields with embedded spiral null points might indicate the regions of major energy dissipation for future space missions such as the Magnetospheric Multiscale Mission.« less
Nichols, Matthew; Elustondo, Pia A; Warford, Jordan; Thirumaran, Aruloli; Pavlov, Evgeny V; Robertson, George S
2017-08-01
The effects of global mitochondrial calcium (Ca 2+ ) uniporter (MCU) deficiency on hypoxic-ischemic (HI) brain injury, neuronal Ca 2+ handling, bioenergetics and hypoxic preconditioning (HPC) were examined. Forebrain mitochondria isolated from global MCU nulls displayed markedly reduced Ca 2+ uptake and Ca 2+ -induced opening of the membrane permeability transition pore. Despite evidence that these effects should be neuroprotective, global MCU nulls and wild-type (WT) mice suffered comparable HI brain damage. Energetic stress enhanced glycolysis and depressed Complex I activity in global MCU null, relative to WT, cortical neurons. HI reduced forebrain NADH levels more in global MCU nulls than WT mice suggesting that increased glycolytic consumption of NADH suppressed Complex I activity. Compared to WT neurons, pyruvate dehydrogenase (PDH) was hyper-phosphorylated in MCU nulls at several sites that lower the supply of substrates for the tricarboxylic acid cycle. Elevation of cytosolic Ca 2+ with glutamate or ionomycin decreased PDH phosphorylation in MCU null neurons suggesting the use of alternative mitochondrial Ca 2+ transport. Under basal conditions, global MCU nulls showed similar increases of Ca 2+ handling genes in the hippocampus as WT mice subjected to HPC. We propose that long-term adaptations, common to HPC, in global MCU nulls compromise resistance to HI brain injury and disrupt HPC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umansky, M. V.; Ryutov, D. D.
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point with the time-evolution defined by interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in simulations are consistent with tokamak experiments which indicate the presence of enhanced transportmore » at the null point.« less
Current progress on TPFI nulling architectures at Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Gappinger, Robert O.; Wallace, J. Kent; Bartos, Randall D.; Macdonald, Daniel R.; Brown, Kenneth A.
2005-01-01
Infrared interferometric nulling is a promising technology for exoplanet detection. Nulling research for the Terrestrial Planet Finder Interferometer has been exploring a variety of interferometer architectures at the Jet Propulsion Laboratory (JPL).
Cardiomyocyte-specific desmin rescue of desmin null cardiomyopathy excludes vascular involvement.
Weisleder, Noah; Soumaka, Elisavet; Abbasi, Shahrzad; Taegtmeyer, Heinrich; Capetanaki, Yassemi
2004-01-01
Mice deficient in desmin, the muscle-specific member of the intermediate filament gene family, display defects in all muscle types and particularly in the myocardium. Desmin null hearts develop cardiomyocyte hypertrophy and dilated cardiomyopathy (DCM) characterized by extensive myocyte cell death, calcific fibrosis and multiple ultrastructural defects. Several lines of evidence suggest impaired vascular function in desmin null animals. To determine whether altered capillary function or an intrinsic cardiomyocyte defect is responsible for desmin null DCM, transgenic mice were generated to rescue desmin expression specifically to cardiomyocytes. Desmin rescue mice display a wild-type cardiac phenotype with no fibrosis or calcification in the myocardium and normalization of coronary flow. Cardiomyocyte ultrastructure is also restored to normal. Markers of hypertrophy upregulated in desmin null hearts return to wild-type levels in desmin rescue mice. Working hearts were perfused to assess coronary flow and cardiac power. Restoration of a wild-type cardiac phenotype in a desmin null background by expression of desmin specifically within cardiomyocyte indicates that defects in the desmin null heart are due to an intrinsic cardiomyocytes defect rather than compromised coronary circulation.
Left cardiac isomerism in the Sonic hedgehog null mouse.
Hildreth, Victoria; Webb, Sandra; Chaudhry, Bill; Peat, Jonathan D; Phillips, Helen M; Brown, Nigel; Anderson, Robert H; Henderson, Deborah J
2009-06-01
Sonic hedgehog (Shh) is a secreted morphogen necessary for the production of sidedness in the developing embryo. In this study, we describe the morphology of the atrial chambers and atrioventricular junctions of the Shh null mouse heart. We demonstrate that the essential phenotypic feature is isomerism of the left atrial appendages, in combination with an atrioventricular septal defect and a common atrioventricular junction. These malformations are known to be frequent in humans with left isomerism. To confirm the presence of left isomerism, we show that Pitx2c, a recognized determinant of morphological leftness, is expressed in the Shh null mutants on both the right and left sides of the inflow region, and on both sides of the solitary arterial trunk exiting from the heart. It has been established that derivatives of the second heart field expressing Isl1 are asymmetrically distributed in the developing normal heart. We now show that this population is reduced in the hearts from the Shh null mutants, likely contributing to the defects. To distinguish the consequences of reduced contributions from the second heart field from those of left-right patterning disturbance, we disrupted the movement of second heart field cells into the heart by expressing dominant-negative Rho kinase in the population of cells expressing Isl1. This resulted in absence of the vestibular spine, and presence of atrioventricular septal defects closely resembling those seen in the hearts from the Shh null mutants. The primary atrial septum, however, was well formed, and there was no evidence of isomerism of the atrial appendages, suggesting that these features do not relate to disruption of the contributions made by the second heart field. We demonstrate, therefore, that the Shh null mouse is a model of isomerism of the left atrial appendages, and show that the recognized associated malformations found at the venous pole of the heart in the setting of left isomerism are likely to arise from the loss of the effects of Shh in the establishment of laterality, combined with a reduced contribution made by cells derived from the second heart field.
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistics is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on the Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that despite large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
The Epistemology of Mathematical and Statistical Modeling: A Quiet Methodological Revolution
ERIC Educational Resources Information Center
Rodgers, Joseph Lee
2010-01-01
A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the…
A neutral model of low-severity fire regimes
Don McKenzie; Amy E. Hessl
2008-01-01
Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire occurrence is a stochastic process, an understanding of baseline variability is necessary in order to identify constraints on surface fire regimes. With a suitable null, or neutral, model, characteristics of natural fire regimes estimated...
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for the binary outcome is developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect method to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. Similar with Kulldorff's methods, we adopt Monte Carlo test for the test of significance. Both methods are applied for detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through a simulation to independent benchmark data, it is indicated that the test statistic based on the Hypergeometric model outweighs Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
The Effect of Magnetic Topology on the Escape of Flare Particles
NASA Technical Reports Server (NTRS)
Antiochos, S. K.; Masson, S.; DeVore, C. R.
2012-01-01
Magnetic reconnection in the solar atmosphere is believed to be the driver of most solar explosive phenomena. Therefore, the topology of the coronal magnetic field is central to understanding the solar drivers of space weather. Of particular importance to space weather are the impulsive Solar Energetic particles that are associated with some CME/eruptive flare events. Observationally, the magnetic configuration of active regions where solar eruptions originate appears to agree with the standard eruptive flare model. According to this model, particles accelerated at the flare reconnection site should remain trapped in the corona and the ejected plasmoid. However, flare-accelerated particles frequently reach the Earth long before the CME does. We present a model that may account for the injection of energetic particles onto open magnetic flux tubes connecting to the Earth. Our model is based on the well-known 2.5D breakout topology, which has a coronal null point (null line) and a four-flux system. A key new addition, however, is that we include an isothermal solar wind with open-flux regions. Depending on the location of the open flux with respect to the null point, we find that the flare reconnection can consist of two distinct phases. At first, the flare reconnection involves only closed field, but if the eruption occurs close to the open field, we find a second phase involving interchange reconnection between open and closed. We argue that this second reconnection episode is responsible for the injection of flare-accelerated particles into the interplanetary medium. We will report on our recent work toward understanding how flare particles escape to the heliosphere. This work uses high-resolution 2.5D MHD numerical simulations performed with the Adaptively Refined MHD Solver (ARMS).
Case Markers in Mongolian: A Means for Encoding Null Constituents in Noun Phrase and Relative Clause
ERIC Educational Resources Information Center
Otgonsuren, Tseden
2017-01-01
This paper focuses on the capacity of the case markers in the Mongolian language, as a relative element, to generate any finite noun phrase or relative clause based on their syntactic function or relationship. In Mongolian, there are two different approaches to generate noun phrases: parataxis and hypotaxis. According to my early observation, if…
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. ?? 2006 by the Ecological Society of America.
Slow-Mode MHD Wave Penetration into a Coronal Null Point due to the Mode Transmission
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey N.; Uralov, Arkadiy M.
2016-11-01
Recent observations of magnetohydrodynamic oscillations and waves in solar active regions revealed their close link to quasi-periodic pulsations in flaring light curves. The nature of that link has not yet been understood in detail. In our analytical modelling we investigate propagation of slow magnetoacoustic waves in a solar active region, taking into account wave refraction and transmission of the slow magnetoacoustic mode into the fast one. The wave propagation is analysed in the geometrical acoustics approximation. Special attention is paid to the penetration of waves in the vicinity of a magnetic null point. The modelling has shown that the interaction of slow magnetoacoustic waves with the magnetic reconnection site is possible due to the mode transmission at the equipartition level where the sound speed is equal to the Alfvén speed. The efficiency of the transmission is also calculated.
A spatiotemporal analysis of U.S. station temperature trends over the last century
NASA Astrophysics Data System (ADS)
Capparelli, V.; Franzke, C.; Vecchio, A.; Freeman, M. P.; Watkins, N. W.; Carbone, V.
2013-07-01
This study presents a nonlinear spatiotemporal analysis of 1167 station temperature records from the United States Historical Climatology Network covering the period from 1898 through 2008. We use the empirical mode decomposition method to extract the generally nonlinear trends of each station. The statistical significance of each trend is assessed against three null models of the background climate variability, represented by stochastic processes of increasing temporal correlation length. We find strong evidence that more than 50% of all stations experienced a significant trend over the last century with respect to all three null models. A spatiotemporal analysis reveals a significant cooling trend in the South-East and significant warming trends in the rest of the contiguous U.S. It also shows that the warming trend appears to have migrated equatorward. This shows the complex spatiotemporal evolution of climate change at local scales.
Specifications for Managed Strings, Second Edition
2010-05-01
const char * cstr , const size_t maxsize, const char *charset); 10 | CMU/SEI-2010-TR-018 Runtime-Constraints s shall not be a null pointer...strcreate_m function creates a managed string, referenced by s, given a conventional string cstr (which may be null or empty). maxsize specifies the...characters to those in the null-terminated byte string cstr (which may be empty). If charset is a null pointer, no restricted character set is defined. If
Wormholes minimally violating the null energy condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouhmadi-López, Mariam; Lobo, Francisco S N; Martín-Moruno, Prado, E-mail: mariam.bouhmadi@ehu.es, E-mail: fslobo@fc.ul.pt, E-mail: pmmoruno@fc.ul.pt
2014-11-01
We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted {sup t}he little sibling of the big rip{sup ,} where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate doesmore » not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.« less
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses, but in testing whether none of the hypotheses is false. There are several possibilities how to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. Bonferroni or Simes test). However, usually there is no a priori knowledge on the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R-package called omnibus.
Continuous development of current sheets near and away from magnetic nulls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sanjay; Bhattacharyya, R.
2016-04-15
The presented computations compare the strength of current sheets which develop near and away from the magnetic nulls. To ensure the spontaneous generation of current sheets, the computations are performed congruently with Parker's magnetostatic theorem. The simulations evince current sheets near two dimensional and three dimensional magnetic nulls as well as away from them. An important finding of this work is in the demonstration of comparative scaling of peak current density with numerical resolution, for these different types of current sheets. The results document current sheets near two dimensional magnetic nulls to have larger strength while exhibiting a stronger scalingmore » than the current sheets close to three dimensional magnetic nulls or away from any magnetic null. The comparative scaling points to a scenario where the magnetic topology near a developing current sheet is important for energetics of the subsequent reconnection.« less
Light cone structure near null infinity of the Kerr metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai Shan; Shang Yu; Graduate School of Chinese Academy of Sciences, Beijing, 100080
2007-02-15
Motivated by our attempt to understand the question of angular momentum of a relativistic rotating source carried away by gravitational waves, in the asymptotic regime near future null infinity of the Kerr metric, a family of null hypersurfaces intersecting null infinity in shearfree (good) cuts are constructed by means of asymptotic expansion of the eikonal equation. The geometry of the null hypersurfaces as well as the asymptotic structure of the Kerr metric near null infinity are studied. To the lowest order in angular momentum, the Bondi-Sachs form of the Kerr metric is worked out. The Newman-Unti formalism is then furthermore » developed, with which the Newman-Penrose constants of the Kerr metric are computed and shown to be zero. Possible physical implications of the vanishing of the Newman-Penrose constants of the Kerr metric are also briefly discussed.« less
Bopp-Podolsky black holes and the no-hair theorem
NASA Astrophysics Data System (ADS)
Cuzinatto, R. R.; de Melo, C. A. M.; Medeiros, L. G.; Pimentel, B. M.; Pompeia, P. J.
2018-01-01
Bopp-Podolsky electrodynamics is generalized to curved space-times. The equations of motion are written for the case of static spherically symmetric black holes and their exterior solutions are analyzed using Bekenstein's method. It is shown that the solutions split up into two parts, namely a non-homogeneous (asymptotically massless) regime and a homogeneous (asymptotically massive) sector which is null outside the event horizon. In addition, in the simplest approach to Bopp-Podolsky black holes, the non-homogeneous solutions are found to be Maxwell's solutions leading to a Reissner-Nordström black hole. It is also demonstrated that the only exterior solution consistent with the weak and null energy conditions is the Maxwell one. Thus, in the light of the energy conditions, it is concluded that only Maxwell modes propagate outside the horizon and, therefore, the no-hair theorem is satisfied in the case of Bopp-Podolsky fields in spherically symmetric space-times.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, first a notion of interval random variable is introduced. Especially, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or test the homogeneity of interval variances assumption. Moreover, the least significant difference (LSD method) for investigating multiple comparison of interval means is developed when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the method of decision-making leads to some degrees to accept or reject the interval hypotheses. An applied example will be used to show the performance of this method.
Mohanty, S; Jermyn, K A; Early, A; Kawata, T; Aubry, L; Ceccarelli, A; Schaap, P; Williams, J G; Firtel, R A
1999-08-01
Dd-STATa is a structural and functional homologue of the metazoan STAT (Signal Transducer and Activator of Transcription) proteins. We show that Dd-STATa null cells exhibit several distinct developmental phenotypes. The aggregation of Dd-STATa null cells is delayed and they chemotax slowly to a cyclic AMP source, suggesting a role for Dd-STATa in these early processes. In Dd-STATa null strains, slug-like structures are formed but they have an aberrant pattern of gene expression. In such slugs, ecmB/lacZ, a marker that is normally specific for cells on the stalk cell differentiation pathway, is expressed throughout the prestalk region. Stalk cell differentiation in Dictyostelium has been proposed to be under negative control, mediated by repressor elements present in the promoters of stalk cell-specific genes. Dd-STATa binds these repressor elements in vitro and the ectopic expression of ecmB/lacZ in the null strain provides in vivo evidence that Dd-STATa is the repressor protein that regulates commitment to stalk cell differentiation. Dd-STATa null cells display aberrant behavior in a monolayer assay wherein stalk cell differentiation is induced using the stalk cell morphogen DIF. The ecmB gene, a general marker for stalk cell differentiation, is greatly overinduced by DIF in Dd-STATa null cells. Also, Dd-STATa null cells are hypersensitive to DIF for expression of ST/lacZ, a marker for the earliest stages in the differentiation of one of the stalk cell sub-types. We suggest that both these manifestations of DIF hypersensitivity in the null strain result from the balance between activation and repression of the promoter elements being tipped in favor of activation when the repressor is absent. Paradoxically, although Dd-STATa null cells are hypersensitive to the inducing effects of DIF and readily form stalk cells in monolayer assay, the Dd-STATa null cells show little or no terminal stalk cell differentiation within the slug. Dd-STATa null slugs remain developmentally arrested for several days before forming very small spore masses supported by a column of apparently undifferentiated cells. Thus, complete stalk cell differentiation appears to require at least two events: a commitment step, whereby the repression exerted by Dd-STATa is lifted, and a second step that is blocked in a Dd-STATa null organism. This latter step may involve extracellular cAMP, a known repressor of stalk cell differentiation, because Dd-STATa null cells are abnormally sensitive to the inhibitory effects of extracellular cyclic AMP.
Cyclin A2 promotes DNA repair in the brain during both development and aging.
Gygli, Patrick E; Chang, Joshua C; Gokozan, Hamza N; Catacutan, Fay P; Schmidt, Theresa A; Kaya, Behiye; Goksel, Mustafa; Baig, Faisal S; Chen, Shannon; Griveau, Amelie; Michowski, Wojciech; Wong, Michael; Palanichamy, Kamalakannan; Sicinski, Piotr; Nelson, Randy J; Czeisler, Catherine; Otero, José J
2016-07-01
Various stem cell niches of the brain have differential requirements for Cyclin A2. Cyclin A2 loss results in marked cerebellar dysmorphia, whereas forebrain growth is retarded during early embryonic development yet achieves normal size at birth. To understand the differential requirements of distinct brain regions for Cyclin A2, we utilized neuroanatomical, transgenic mouse, and mathematical modeling techniques to generate testable hypotheses that provide insight into how Cyclin A2 loss results in compensatory forebrain growth during late embryonic development. Using unbiased measurements of the forebrain stem cell niche, we parameterized a mathematical model whereby logistic growth instructs progenitor cells as to the cell-types of their progeny. Our data was consistent with prior findings that progenitors proliferate along an auto-inhibitory growth curve. The growth retardation inCCNA2-null brains corresponded to cell cycle lengthening, imposing a developmental delay. We hypothesized that Cyclin A2 regulates DNA repair and that CCNA2-null progenitors thus experienced lengthened cell cycle. We demonstrate that CCNA2-null progenitors suffer abnormal DNA repair, and implicate Cyclin A2 in double-strand break repair. Cyclin A2's DNA repair functions are conserved among cell lines, neural progenitors, and hippocampal neurons. We further demonstrate that neuronal CCNA2 ablation results in learning and memory deficits in aged mice.
Loss of ATM kinase activity leads to embryonic lethality in mice.
Daniel, Jeremy A; Pellegrini, Manuela; Lee, Baeck-Seung; Guo, Zhi; Filsuf, Darius; Belkina, Natalya V; You, Zhongsheng; Paull, Tanya T; Sleckman, Barry P; Feigenbaum, Lionel; Nussenzweig, André
2012-08-06
Ataxia telangiectasia (A-T) mutated (ATM) is a key deoxyribonucleic acid (DNA) damage signaling kinase that regulates DNA repair, cell cycle checkpoints, and apoptosis. The majority of patients with A-T, a cancer-prone neurodegenerative disease, present with null mutations in Atm. To determine whether the functions of ATM are mediated solely by its kinase activity, we generated two mouse models containing single, catalytically inactivating point mutations in Atm. In this paper, we show that, in contrast to Atm-null mice, both D2899A and Q2740P mutations cause early embryonic lethality in mice, without displaying dominant-negative interfering activity. Using conditional deletion, we find that the D2899A mutation in adult mice behaves largely similar to Atm-null cells but shows greater deficiency in homologous recombination (HR) as measured by hypersensitivity to poly (adenosine diphosphate-ribose) polymerase inhibition and increased genomic instability. These results may explain why missense mutations with no detectable kinase activity are rarely found in patients with classical A-T. We propose that ATM kinase-inactive missense mutations, unless otherwise compensated for, interfere with HR during embryogenesis.
Jumping the energetics queue: Modulation of pulsar signals by extraterrestrial civilizations
NASA Astrophysics Data System (ADS)
Chennamangalam, Jayanth; Siemion, Andrew P. V.; Lorimer, D. R.; Werthimer, Dan
2015-01-01
It has been speculated that technological civilizations evolve along an energy consumption scale first formulated by Kardashev, ranging from human-like civilizations that consume energy at a rate of ∼1019 erg s-1 to hypothetical highly advanced civilizations that can consume ∼1044 erg s-1. Since the transmission power of a beacon a civilization can build depends on the energy it possesses, to make it bright enough to be seen across the Galaxy would require high technological advancement. In this paper, we discuss the possibility of a civilization using naturally-occurring radio transmitters - specifically, radio pulsars - to overcome the Kardashev limit of their developmental stage and transmit super-Kardashev power. This is achieved by the use of a modulator situated around a pulsar, that modulates the pulsar signal, encoding information onto its natural emission. We discuss a simple modulation model using pulse nulling and considerations for detecting such a signal. We find that a pulsar with a nulling modulator will exhibit an excess of thermal emission peaking in the ultraviolet during its null phases, revealing the existence of a modulator.
High heat flux measurements and experimental calibrations/characterizations
NASA Technical Reports Server (NTRS)
Kidd, Carl T.
1992-01-01
Recent progress in techniques employed in the measurement of very high heat-transfer rates in reentry-type facilities at the Arnold Engineering Development Center (AEDC) is described. These advances include thermal analyses applied to transducer concepts used to make these measurements; improved heat-flux sensor fabrication methods, equipment, and procedures for determining the experimental time response of individual sensors; performance of absolute heat-flux calibrations at levels above 2,000 Btu/cu ft-sec (2.27 kW/cu cm); and innovative methods of performing in-situ run-to-run characterizations of heat-flux probes installed in the test facility. Graphical illustrations of the results of extensive thermal analyses of the null-point calorimeter and coaxial surface thermocouple concepts with application to measurements in aerothermal test environments are presented. Results of time response experiments and absolute calibrations of null-point calorimeters and coaxial thermocouples performed in the laboratory at intermediate to high heat-flux levels are shown. Typical AEDC high-enthalpy arc heater heat-flux data recently obtained with a Calspan-fabricated null-point probe model are included.
Ryutov, D. D.; Soukhanovskii, V. A.
2015-11-17
The snowflake magnetic configuration is characterized by the presence of two closely spaced poloidal field nulls that create a characteristic hexagonal (reminiscent of a snowflake) separatrix structure. The magnetic field properties and the plasma behaviour in the snowflake are determined by the simultaneous action of both nulls, this generating a lot of interesting physics, as well as providing a chance for improving divertor performance. One of the most interesting effects of the snowflake geometry is the heat flux sharing between multiple divertor channels. The authors summarise experimental results obtained with the snowflake configuration on several tokamaks. Wherever possible, relation tomore » the existing theoretical models is described. Divertor concepts utilizing the properties of a snowflake configuration are briefly discussed.« less
Tests of Hypotheses Arising In the Correlated Random Coefficient Model*
Heckman, James J.; Schmierer, Daniel
2010-01-01
This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148
2017-01-01
The consequences of selection at linked sites are multiple and widespread across the genomes of most species. Here, I first review the main concepts behind models of selection and linkage in recombining genomes, present the difficulty in parametrizing these models simply as a reduction in effective population size (Ne) and discuss the predicted impact of recombination rates on levels of diversity across genomes. Arguments are then put forward in favour of using a model of selection and linkage with neutral and deleterious mutations (i.e. the background selection model, BGS) as a sensible null hypothesis for investigating the presence of other forms of selection, such as balancing or positive. I also describe and compare two studies that have generated high-resolution landscapes of the predicted consequences of selection at linked sites in Drosophila melanogaster. Both studies show that BGS can explain a very large fraction of the observed variation in diversity across the whole genome, thus supporting its use as null model. Finally, I identify and discuss a number of caveats and challenges in studies of genetic hitchhiking that have been often overlooked, with several of them sharing a potential bias towards overestimating the evidence supporting recent selective sweeps to the detriment of a BGS explanation. One potential source of bias is the analysis of non-equilibrium populations: it is precisely because models of selection and linkage predict variation in Ne across chromosomes that demographic dynamics are not expected to be equivalent chromosome- or genome-wide. Other challenges include the use of incomplete genome annotations, the assumption of temporally stable recombination landscapes, the presence of genes under balancing selection and the consequences of ignoring non-crossover (gene conversion) recombination events. This article is part of the themed issue ‘Evolutionary causes and consequences of recombination rate variation in sexual organisms’. PMID:29109230
Zagorski, Joseph W; Maser, Tyler P; Liby, Karen T; Rockwell, Cheryl E
2017-05-01
Nuclear factor erythroid 2-related factor 2 (Nrf2) is a stress-activated transcription factor activated by stimuli such as electrophilic compounds and other reactive xenobiotics. Previously, we have shown that the commonly used food additive and Nrf2 activator tert -butylhydroquinone (tBHQ) suppresses interleukin-2 (IL-2) production, CD25 expression, and NF κ B activity in human Jurkat T cells. The purpose of the current studies was to determine whether these effects were dependent upon Nrf2 by developing a human Nrf2-null T cell model using clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 technology. The current studies show that suppression of CD25 expression by tBHQ is partially dependent on Nrf2, whereas inhibition of IL-2 secretion is largely Nrf2-independent. Interestingly, tBHQ inhibited NF κ B activation in an Nrf2-independent manner. This was an unexpected finding since Nrf2 inhibits NF κ B activation in other models. These results led us to investigate another more potent Nrf2 activator, the synthetic triterpenoid 1[2-cyano-3,12-dioxooleana-1,9(11)-dien-28-oyl]imidazole (CDDO-Im). Treatment of wild-type and Nrf2-null Jurkat T cells with CDDO-Im resulted in an Nrf2-dependent suppression of IL-2. Furthermore, susceptibility to reactive oxygen species was significantly enhanced in the Nrf2-null clones as determined by decreased mitochondrial membrane potential and cell viability. Importantly, this study is the first to describe the generation of a human Nrf2-null model, which is likely to have multiple applications in immunology and cancer biology. Collectively, this study demonstrates a role for Nrf2 in the effects of CDDO-Im on CD25 and IL-2 expression, whereas the effect of tBHQ on these parameters is complex and likely involves modulation of multiple stress-activated transcription factors, including NF κ B and Nrf2. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factor which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under normal sampling model and even with small sample sizes, our approach provides false positive and false negative proportions that are less than other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are showed in the analysis of two microarray experiments. © The Author(s) 2011.
Stanton, M. Mark; Nelson, Lisa K.; Benediktsson, Hallgrimur; Hollenberg, Morley D.; Buret, Andre G.; Ceri, Howard
2013-01-01
Background. Nonbacterial prostatitis has no established etiology. We hypothesized that proteinase-activated receptor-1 (PAR1) can play a role in prostatitis. We therefore investigated the effects of PAR1 stimulation in the context of a new model of murine nonbacterial prostatitis. Methods. Using a hapten (ethanol-dinitrobenzene sulfonic acid- (DNBS-)) induced prostatitis model with both wild-type and PAR1-null mice, we examined (1) the location of PAR1 in the mouse prostate and (2) the impact of a PAR1-activating peptide (TFLLR-NH2: PAR1-TF) on ethanol-DNBS-induced inflammation. Results. Ethanol-DNBS-induced inflammation was maximal at 2 days. In the tissue, PAR1 was expressed predominantly along the apical acini of prostatic epithelium. Although PAR1-TF on its own did not cause inflammation, its coadministration with ethanol-DNBS reduced all indices of acute prostatitis. Further, PAR1-TF administration doubled the prostatic production of interleukin-10 (IL-10) compared with ethanol-DNBS treatment alone. This enhanced IL-10 was not observed in PAR1-null mice and was not caused by the reverse-sequence receptor-inactive peptide, RLLFT-NH2. Surprisingly, PAR1-TF, also diminished ethanol-DNBS-induced inflammation in PAR1-null mice. Conclusions. PAR1 is expressed in the mouse prostate and its activation by PAR1-TF elicits immunomodulatory effects during ethanol-DNBS-induced prostatitis. However, PAR1-TF also diminishes ethanol-DNBS-induced inflammation via a non-PAR1 mechanism by activating an as-yet unknown receptor. PMID:24459330
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of integrity of biotic index (IBI; i.e., multi-metric indices, MMI) across temporal and spatial scales is one of the most important issues in water ecosystem integrity bioassessment and water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across the Zhejiang Province, China, we used random forests (RF) to adjust the effects of natural variations at temporal and spatial scales on macroinvertebrate metrics. We then developed natural variations adjusted (predictive) and unadjusted (null) MMIs and compared performance between them. The core me-trics selected for predictive and null MMIs were different from each other, and natural variations within core metrics in predictive MMI explained by RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive than null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most degraded site were flagged outside of the environmental space of the reference site network. We found that combination of predictive MMI developed by using predictive model and the nearest-neighbor test performed best and decreased risks of inferring type I (designating a water body as being in poor biological condition, when it was actually in good condition) and type II (designating a water body as being in good biological condition, when it was actually in poor condition) errors. Our results provided an effective method to improve the stability and performance of integrity of biotic index.