Sample records for measures general linear

  1. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  2. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  3. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  4. Information Fusion from the Point of View of Communication Theory; Fusing Information to Trade-Off the Resolution of Assessments Against the Probability of Mis-Assessment

    DTIC Science & Technology

    2013-08-19

    excellence in linear models, 2010. She successfully defended her dissertation, Linear System Design for Fusion and Compression, on Aug 13, 2013. Her work was... measurements into canonical coordinates, scaling, and rotation; there is a water-filling interpretation; (3) the optimum design of a linear secondary channel of... measurements to fuse with a primary linear channel of measurements maximizes a generalized Rayleigh quotient; (4) the asymptotically optimum

  5. Generalized concurrence in boson sampling.

    PubMed

    Chin, Seungbeom; Huh, Joonsuk

    2018-04-17

    A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (the boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the "Fock state concurrence sum" C_S, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents will provide a unified viewpoint for interpreting the quantum computing power of linear optics.
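    The classical hardness referred to in this abstract comes from the matrix permanent. As an illustrative sketch (not the authors' generalized algorithm), Ryser's formula below computes the permanent of an n×n matrix in O(2^n · n²) time, which is essentially the best known scaling and is why transition amplitudes are expensive to simulate classically:

```python
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix via Ryser's formula.

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} a[i][j]
    Runs in O(2^n * n^2), exponentially slower than the determinant.
    """
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        sign = (-1) ** k
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return (-1) ** n * total
```

    For example, the permanent of [[1, 2], [3, 4]] is 1·4 + 2·3 = 10, in contrast to the determinant's 1·4 − 2·3.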

  6. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sinusoidal drive velocity signal. Velocity-axis linearity is a critical parameter determining Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and then linearizing them increases spectral linearity over a wider range of drive frequencies, since harmonic motion is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and smaller line-width parameters than spectra measured with a traditional triangular velocity signal.
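    The linearization step can be pictured as a resampling problem: counts recorded along the monotonically increasing portion of the sine velocity sweep are interpolated onto a uniform velocity grid. A minimal sketch, not the authors' procedure (the function name and the quarter-period sweep are illustrative assumptions):

```python
import math

def linearize_sine_spectrum(counts, v_max):
    """Resample a spectrum recorded with sinusoidal velocity onto a
    uniform velocity axis by linear interpolation (schematic sketch).

    Assumes `counts[i]` was recorded at velocity
    v_max * sin(pi/2 * i/(n-1)), i.e. a quarter-period sine sweep,
    which is strictly increasing from 0 to v_max.
    """
    n = len(counts)
    v_sine = [v_max * math.sin(math.pi / 2 * i / (n - 1)) for i in range(n)]
    v_lin = [v_max * i / (n - 1) for i in range(n)]
    out = []
    j = 0
    for v in v_lin:
        # advance to the bracketing interval [v_sine[j], v_sine[j+1]]
        while j < n - 2 and v_sine[j + 1] < v:
            j += 1
        t = (v - v_sine[j]) / (v_sine[j + 1] - v_sine[j])
        out.append(counts[j] * (1 - t) + counts[j + 1] * t)
    return out
```

    Interpolating a channel axis that is itself the sine velocity reproduces the uniform grid exactly, which makes a convenient self-check.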

  7. A Linear Variable-[theta] Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  8. New evidence and impact of electron transport non-linearities based on new perturbative inter-modulation analysis

    NASA Astrophysics Data System (ADS)

    van Berkel, M.; Kobayashi, T.; Igami, H.; Vandersteen, G.; Hogeweij, G. M. D.; Tanaka, K.; Tamura, N.; Zwart, H. J.; Kubo, S.; Ito, S.; Tsuchiya, H.; de Baar, M. R.; LHD Experiment Group

    2017-12-01

    A new methodology to analyze non-linear components in perturbative transport experiments is introduced. The methodology has been experimentally validated in the Large Helical Device for the electron heat transport channel. Electron cyclotron resonance heating with different modulation frequencies by two gyrotrons has been used to directly quantify the amplitude of the non-linear component at the inter-modulation frequencies. The measurements show significant quadratic non-linear contributions and also the absence of cubic and higher order components. The non-linear component is analyzed using the Volterra series, which is the non-linear generalization of transfer functions. This allows us to study the radial distribution of the non-linearity of the plasma and to reconstruct linear profiles where the measurements were not distorted by non-linearities. The reconstructed linear profiles are significantly different from the measured profiles, demonstrating the significant impact that non-linearity can have.

  9. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, where research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper, the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual patterns of association among research variables.

  10. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  11. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
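    The fixed-effects core of such a containment-probability model is an ordinary logistic GLM fitted by iteratively reweighted least squares (IRLS, i.e. Newton's method); the random effects of the mixed model are omitted here. A minimal single-predictor sketch with illustrative variable names, not the authors' model:

```python
import math

def fit_logistic(x, y, iters=25):
    """Fit P(y = 1) = 1 / (1 + exp(-(b0 + b1*x))) by Newton/IRLS,
    the standard fitting routine at the heart of a generalized
    linear (mixed) model with a logit link."""
    b0 = b1 = 0.0
    for _ in range(iters):
        mu = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]
        w = [m * (1 - m) for m in mu]             # IRLS weights
        # 2x2 information matrix X'WX and score X'(y - mu)
        s00 = sum(w)
        s01 = sum(wi * xi for wi, xi in zip(w, x))
        s11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        det = s00 * s11 - s01 * s01
        b0 += (s11 * g0 - s01 * g1) / det         # Newton update
        b1 += (s00 * g1 - s01 * g0) / det
    return b0, b1
```

    At convergence the score equations force the fitted probabilities to reproduce the observed success count, a useful sanity check on any logistic fit.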

  12. Linear and Nonlinear Thinking: A Multidimensional Model and Measure

    ERIC Educational Resources Information Center

    Groves, Kevin S.; Vance, Charles M.

    2015-01-01

    Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…

  13. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625

  14. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  15. General job stress: a unidimensional measure and its non-linear relations with outcome variables.

    PubMed

    Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley

    2012-04-01

    This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress.

  16. Arbitrarily Complete Bell-State Measurement Using only Linear Optical Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grice, Warren P

    2011-01-01

    A complete Bell-state measurement is not possible using only linear-optic elements, and most schemes achieve a success rate of no more than 50%, distinguishing, for example, two of the four Bell states but returning degenerate results for the other two. It is shown here that the introduction of a pair of ancillary entangled photons improves the success rate to 75%. More generally, the addition of 2^N − 2 ancillary photons yields a linear-optic Bell-state measurement with a success rate of 1 − 1/2^N.
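    The success-rate formula quoted above is easy to sanity-check numerically: N = 1 uses 2^1 − 2 = 0 ancillary photons and reproduces the standard 50% limit, while N = 2 uses one ancillary entangled pair and gives 75%. A trivial sketch:

```python
def bsm_success(n):
    """Success rate of the linear-optic Bell-state measurement when
    2**n - 2 ancillary photons are added (formula from the abstract)."""
    return 1 - 1 / 2 ** n

def ancilla_photons(n):
    """Number of ancillary photons required for exponent n."""
    return 2 ** n - 2
```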

  17. Measures and models for angular correlation and angular-linear correlation. [correlation of random variables

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.; Wehrly, T.

    1976-01-01

    Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.

  18. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  19. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    PubMed

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary repeated-measures data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were fitted to a sample of binary repeated-measures data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated-measures data using GEE and GLMMs.

  20. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity of analyzing certain types of experiments by applying random-effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, illustrate all the concepts with the help of a measurement campaign conducted to challenge the uncertainty budget in the calibration of accelerometers.
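    In the balanced one-way random-effects setting of GUM Annex H.5, the two variance components can be estimated by the classical method-of-moments (ANOVA) equations, a special case of the mixed-model machinery discussed here. A minimal sketch, assuming a balanced design:

```python
def variance_components(groups):
    """One-way random-effects ANOVA variance components by the
    method of moments: returns (within-group variance estimate,
    between-group variance estimate). Assumes a balanced design,
    i.e. every group has the same number of observations."""
    k = len(groups)
    n = len(groups[0])
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    # mean squares within and between groups
    msw = sum(sum((x - m) ** 2 for x in g)
              for g, m in zip(groups, means)) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    # between-group component; truncated at zero as is conventional
    return msw, max((msb - msw) / n, 0.0)
```

    The between-group component is what feeds the long-term (day-to-day) contribution to an uncertainty budget, while the within-group component captures repeatability.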

  1. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are therefore not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interfaces (GUI). We developed, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to carry out and compare various models for repeated-measures data (GEE, GLMM, HGLM, and GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM gave very close results.

  2. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares to reconstruct the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations relating the pressure to the computed pressure gradients is formulated and then solved by generalized least squares using the variance-covariance matrix of the pressure gradients. Compared against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
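    The generalized least squares idea can be illustrated in its simplest form: with a diagonal error covariance (independent, heteroscedastic errors), GLS reduces to weighted least squares with weights equal to inverse variances. A sketch for a scalar linear model, not the paper's staggered-grid pressure solver:

```python
def weighted_least_squares(x, y, var):
    """Fit y = a + b*x by weighted least squares, the diagonal-covariance
    special case of generalized least squares: each point is weighted by
    1/variance, so noisier measurements influence the fit less."""
    w = [1.0 / v for v in var]
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    a = ym - b * xm
    return a, b
```

    Handling a full (correlated) covariance matrix requires the matrix form (XᵀΩ⁻¹X)⁻¹XᵀΩ⁻¹y, but the inverse-variance weighting shown here is the essential mechanism behind the method's robustness to heteroscedastic noise.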

  3. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    PubMed

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies of adolescents, word knowledge outcomes tend to be collected in complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data with such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  4. Finite-time H∞ filtering for non-linear stochastic systems

    NASA Astrophysics Data System (ADS)

    Hou, Mingzhe; Deng, Zongquan; Duan, Guangren

    2016-09-01

    This paper describes the robust H∞ filtering analysis and synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamics are modelled by Itô-type stochastic differential equations in which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.

  5. The Association of Health and Income in the Elderly: Experience from a Southern State of Brazil

    PubMed Central

    Fillenbaum, Gerda G.; Blay, Sergio L.; Pieper, Carl F.; King, Katherine E.; Andreoli, Sergio B.; Gastal, Fábio L.

    2013-01-01

    Objectives: In high-income, developed countries, health status tends to improve as income increases, but primarily through the 50th-66th percentile of income. It is unclear whether the same limitation holds in middle-income countries, and for both general assessments of health and specific conditions. Methods: Data were obtained from Brazil, a middle-income country. In-person interviews with a representative sample of community residents age ≥60 (N = 6963), in the southern state of Rio Grande do Sul, obtained information on demographic characteristics including household income and number of persons supported, general health status (self-rated health, functional status), depression, and seven physician-diagnosed, self-reported health conditions. Analyses used household income (adjusted for number supported and economies of scale) together with higher-order income terms, and controlled for demographics and comorbidities, to ascertain nonlinearity between income and general and specific health measures. Results: In fully controlled analyses, income was associated with general measures of health (linearly with self-rated health, nonlinearly with functional status). For specific health measures there was a consistent linear association with depression, pulmonary disorders, renal disorders, and sensory impairment. For musculoskeletal, cardiovascular (negative association), and gastrointestinal disorders this association no longer held when comorbidities were controlled. There was no association with diabetes. Conclusion: Contrary to findings in high-income countries, the association of household-size-adjusted income with health was generally linear, sometimes negative, and sometimes absent when comorbidities were controlled. PMID:24058505

  6. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  7. Nonparametric triple collocation

    USDA-ARS?s Scientific Manuscript database

    Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...

  8. Using General Outcome Measures to Predict Student Performance on State-Mandated Assessments: An Applied Approach for Establishing Predictive Cutscores

    ERIC Educational Resources Information Center

    Leblanc, Michael; Dufore, Emily; McDougal, James

    2012-01-01

    Cutscores for reading and math (general outcome measures) to predict passage on New York state-mandated assessments were created by using a freely available Excel workbook. The authors used linear regression to create the cutscores and diagnostic indicators were provided. A rationale and procedure for using this method is outlined. This method…

  9. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    PubMed

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.

  10. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  11. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated-measures data, generalized linear mixed-effects models provide a flexible and powerful tool for dealing with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which pre-treatment (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated-measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and sample size calculations are illustrated with real data arising from randomized controlled trials.

  12. Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell tests. Our goal is to investigate quantum entanglement in the presence of white noise, which appears frequently in experiments and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs for each local observer. This result shows a generic undetectability of mixed entangled states, in contrast to Gisin's theorem for pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method for exhibiting a nontrivial set of noisy entangled states with a small number of parties that satisfy all general linear Bell inequalities. These results imply that this class of Bell tests is typically incomplete as a means of detecting entanglement.

  13. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. Integration of certain singular boundary element integrals for applications in linear acoustics

    NASA Technical Reports Server (NTRS)

    Zimmerle, D.; Bernhard, R. J.

    1985-01-01

An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solution for linear acoustic problems is examined. The method may be generalized to most characteristic solutions.
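The singular/nonsingular splitting can be illustrated on a one-dimensional model problem (our toy example, not the paper's BEM kernels): subtract the integrand's value at the singular point so the remainder is bounded, integrate the singular term analytically, and apply Gauss-Legendre quadrature to the rest.

```python
# Toy 1-D analogue of the splitting (not the paper's BEM kernels):
#   I = integral_0^1 f(x)/sqrt(x) dx
#     = f(0) * integral_0^1 x^(-1/2) dx            (singular, analytic)
#     + integral_0^1 (f(x) - f(0))/sqrt(x) dx      (bounded, Gauss-Legendre)
import numpy as np

def split_quadrature(f, n=32):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)            # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    singular = 2.0 * f(0.0)            # analytic: integral of x^(-1/2) on [0,1] is 2
    remainder = np.sum(w * (f(x) - f(0.0)) / np.sqrt(x))
    return singular + remainder

def reference(f, n=200):
    # Independent check: the substitution x = t**2 removes the singularity,
    # giving I = integral_0^1 2*f(t**2) dt, which is smooth.
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)
    w = 0.5 * weights
    return np.sum(w * 2.0 * f(t**2))

print(split_quadrature(np.cos), reference(np.cos))
```

The two routes agree closely, which is the point of the splitting: plain Gaussian quadrature applied directly to f(x)/sqrt(x) would converge very poorly.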

  14. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
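One standard remedy for structured (colored) noise in a GLM can be sketched as follows, assuming an AR(1) noise model and a Cochrane-Orcutt-style prewhitening step; the commentary's actual estimators may differ, and the design matrix here is invented.

```python
# Sketch of prewhitening for serially correlated (colored) noise in a GLM,
# Cochrane-Orcutt style with an AR(1) assumption. Illustrative only; not
# the specific pipeline discussed in the commentary.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Design: intercept plus a slow sinusoidal "task" regressor (invented)
X = np.column_stack([np.ones(n), np.sin(np.linspace(0.0, 20.0, n))])
beta_true = np.array([1.0, 2.0])

# AR(1) noise with phi = 0.8 (strong serial correlation)
phi = 0.8
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal(scale=0.5)
y = X @ beta_true + e

# 1) Naive OLS fit
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_ols

# 2) Estimate the AR(1) coefficient from the lag-1 residual autocorrelation
phi_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

# 3) Prewhiten both sides and refit: y_t - phi*y_{t-1}, X_t - phi*X_{t-1}
y_w = y[1:] - phi_hat * y[:-1]
X_w = X[1:] - phi_hat * X[:-1]
b_gls, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
print(phi_hat, b_gls)
```

After prewhitening, the error term of the transformed model is approximately white, so the usual GLM standard errors are no longer invalidated by the serial correlation.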

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grice, W. P.

A complete Bell-state measurement is not possible using only linear-optic elements, and most schemes achieve a success rate of no more than 50%, distinguishing, for example, two of the four Bell states but returning degenerate results for the other two. It is shown here that the introduction of a pair of ancillary entangled photons improves the success rate to 75%. More generally, the addition of 2^N − 2 ancillary photons yields a linear-optic Bell-state measurement with a success rate of 1 − 1/2^N.

  16. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. First, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data. If any (P

  17. Some New Properties of Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Fei; Wei, Yunxia

    2017-02-01

Quantum coherence measures the correlation between different measurement results in a single system, while entanglement and quantum discord measure correlations among different subsystems of a multipartite system. In this paper, we focus on the relative entropy forms of these quantities and obtain three new properties: 1) general forms of maximally coherent states for the relative entropy of coherence, 2) linear monogamy of the relative entropy of entanglement, and 3) subadditivity of quantum discord. Here, linear monogamy means that a small constant upper-bounds the sum of the relative entropy of entanglement over subsystems.
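Property 1 can be checked numerically with the standard definition C(ρ) = S(ρ_diag) − S(ρ); the sketch below confirms that the uniform-superposition maximally coherent state attains log₂ d (the d = 4 example is ours, not from the paper).

```python
# Numerical check (standard definitions): the relative entropy of coherence
# C(rho) = S(diag rho) - S(rho) equals log2(d) for the maximally coherent
# state |psi> = sum_i |i>/sqrt(d). The d = 4 example is ours.
import numpy as np

def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]                 # drop numerical zeros
    return float(-np.sum(vals * np.log2(vals)))

def rel_entropy_coherence(rho):
    dephased = np.diag(np.diag(rho))          # keep only the diagonal part
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

d = 4
psi = np.ones(d) / np.sqrt(d)                 # maximally coherent state
rho = np.outer(psi, psi.conj())
print(rel_entropy_coherence(rho))             # expect log2(4) = 2
```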

  18. Estimating mutual information using B-spline functions – an improved similarity measure for analysing gene expression data

    PubMed Central

    Daub, Carsten O; Steuer, Ralf; Selbig, Joachim; Kloska, Sebastian

    2004-01-01

Background The information theoretic concept of mutual information provides a general framework to evaluate dependencies between variables. In the context of clustering genes with similar patterns of expression, it has been suggested as a general measure of similarity extending commonly used linear measures. Since mutual information is defined in terms of discrete variables, its application to continuous data requires the use of binning procedures, which can lead to significant numerical errors for datasets of small or moderate size. Results In this work, we propose a method for the numerical estimation of mutual information from continuous data. We investigate the characteristic properties arising from the application of our algorithm and show that our approach outperforms commonly used algorithms: the significance, as a measure of the power of distinction from random correlation, is significantly increased. This concept is subsequently illustrated on two large-scale gene expression datasets, and the results are compared to those obtained using other similarity measures. C++ source code of our algorithm is available for non-commercial use from kloska@scienion.de upon request. Conclusion The utilisation of mutual information as a similarity measure enables the detection of non-linear correlations in gene expression datasets, extending the frequently applied linear correlation measures, which are often used on an ad hoc basis without further justification. PMID:15339346
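For contrast with the paper's B-spline estimator, here is the plain equal-width-binning (histogram) baseline it improves upon, applied to synthetic Gaussian data (our example, not the paper's code or data):

```python
# Baseline estimator the paper improves upon: mutual information from a
# plain equal-width 2-D histogram. Synthetic Gaussian data; numbers ours.
import numpy as np

def mutual_information(x, y, bins=10):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                     # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
mi_dep = mutual_information(x, x + 0.3 * rng.normal(size=5000))  # dependent pair
mi_ind = mutual_information(x, rng.normal(size=5000))            # independent pair
print(mi_dep, mi_ind)
```

The dependent pair yields a clearly larger estimate than the independent pair; the small positive value for independent data is exactly the finite-sample binning bias the paper's soft-binning approach is designed to reduce.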

  19. Validating the applicability of the GUM procedure

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Harris, Peter M.

    2014-08-01

This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM, JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark against which measurement results provided by the GUM procedure can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or that have been linearized when the linearization process is deemed adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches, such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials, as benchmarks.
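The benchmarking idea can be sketched for a toy model Y = X₁X₂ (our example, not one from the paper): propagate distributions by Monte Carlo as in GUM Supplement 1 and compare the result with the linearized law-of-propagation uncertainty.

```python
# Toy comparison (model and numbers ours, not the paper's): linear GUM
# propagation versus the Monte Carlo method of GUM Supplement 1 for
# the measurement model Y = X1 * X2 with independent Gaussian inputs.
import numpy as np

mu1, u1 = 10.0, 0.1          # estimate and standard uncertainty of X1
mu2, u2 = 5.0, 0.2           # estimate and standard uncertainty of X2

# Law of propagation of uncertainty (first-order Taylor expansion):
# u(y)^2 = (x2*u1)^2 + (x1*u2)^2 for independent inputs
u_lin = np.hypot(mu2 * u1, mu1 * u2)

# Monte Carlo propagation of the full distributions
rng = np.random.default_rng(42)
M = 200_000
y = rng.normal(mu1, u1, M) * rng.normal(mu2, u2, M)
u_mc = y.std(ddof=1)

print(u_lin, u_mc)   # nearly equal here, because the model is only mildly nonlinear
```

For this well-behaved model the two agree to well within one significant digit, which is the kind of validation the paper formalizes; for strongly nonlinear models or asymmetric input distributions the Monte Carlo benchmark would expose larger discrepancies.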

  20. Exact and near backscattering measurements of the linear depolarisation ratio of various ice crystal habits generated in a laboratory cloud chamber

    NASA Astrophysics Data System (ADS)

    Smith, Helen R.; Connolly, Paul J.; Webb, Ann R.; Baran, Anthony J.

    2016-07-01

    Ice clouds were generated in the Manchester Ice Cloud Chamber (MICC), and the backscattering linear depolarisation ratio, δ, was measured for a variety of habits. To create an assortment of particle morphologies, the humidity in the chamber was varied throughout each experiment, resulting in a range of habits from the pristine to the complex. This technique was repeated at three temperatures: -7 °C, -15 °C and -30 °C, in order to produce both solid and hollow columns, plates, sectored plates and dendrites. A linearly polarised 532 nm continuous wave diode laser was directed through a section of the cloud using a non-polarising 50:50 beam splitter. Measurements of the scattered light were taken at 178°, 179° and 180°, using a Glan-Taylor prism to separate the co- and cross-polarised components. The intensities of these components were measured using two amplified photodetectors and the ratio of the cross- to co-polarised intensities was measured to find the linear depolarisation ratio. In general, it was found that Ray Tracing over-predicts the linear depolarisation ratio. However, by creating more accurate particle models which better represent the internal structure of ice particles, discrepancies between measured and modelled results (based on Ray Tracing) were reduced.

  1. Probabilistic measurement of non-physical constructs during early childhood: Epistemological implications for advancing psychosocial science

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Fatani, S. S.

    2010-07-01

Social researchers commonly compute ordinal raw scores and ratings to quantify human aptitudes, attitudes, and abilities, but without a clear understanding of their limitations for scientific knowledge. In this research, common ordinal measures were compared to higher-order linear (equal-interval) scale measures to clarify implications for objectivity, precision, ontological coherence, and meaningfulness. Raw score gains, residualized raw gains, and linear gains calculated with a Rasch model were compared between Time 1 and Time 2 for observations from two early childhood learning assessments. Comparisons show major inconsistencies between ratings and linear gains. When the gain distribution was dense and relatively compact, and initial status was near the item mid-range, linear measures and ratings were indistinguishable. When Time 1 status was distributed more broadly and the magnitude of change was variable, ratings were unrelated to linear gain, which underscores the problematic implications of ordinal measures. Surprisingly, residualized gain scores did not significantly improve ordinal measurement of change. In general, raw scores and ratings may be meaningful in specific samples for establishing order and high/low rank, but raw score differences suffer from non-uniform units. Even the meaningfulness of sample comparisons, as well as of derived proportions and percentages, is seriously affected by rank-order distortions, and such uses should be avoided.
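The non-uniform-units point can be made concrete with a Rasch-style logit transform (illustrative numbers, not the study's data): the same five-point raw gain corresponds to very different amounts of change on a linear scale depending on where it occurs.

```python
# Illustrative numbers (not the study's data): a Rasch-style logit
# transform, measure = ln(p / (1 - p)), shows that equal raw-score gains
# are not equal amounts of change on a linear (equal-interval) scale.
import math

def raw_to_logit(raw, max_score):
    p = raw / max_score
    return math.log(p / (1 - p))

max_score = 50
gain_mid  = raw_to_logit(30, max_score) - raw_to_logit(25, max_score)  # 25 -> 30
gain_high = raw_to_logit(48, max_score) - raw_to_logit(43, max_score)  # 43 -> 48
print(gain_mid, gain_high)   # same 5-point raw gain, very different logit gains
```

Near the middle of the score range the five raw points are worth about 0.4 logits; near the ceiling the same five raw points are worth more than three times as much, which is why raw-score differences distort comparisons of change.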

  2. Predicting the Underwater Sound of Moderate and Heavy Rainfall from Laboratory Measurements of Radiation from Single Large Raindrops

    DTIC Science & Technology

    1992-03-01

Elementary Linear Algebra with Applications, pp. 301-323, John Wiley and Sons Inc., 1987. Atlas, D., and Ulbrich, C. E. W., "The Physical Basis for... In this case, the linear system is said to be inconsistent (Anton and Rorres, 1987). In contrast, for an underdetermined system (where the... ocean acoustical tomography and seismology. In simplest terms, the general linear inverse problem consists of finding the desired solution to a set of m

  3. On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites

    ERIC Educational Resources Information Center

    Penev, Spiridon; Raykov, Tenko

    2006-01-01

    A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…

  4. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  5. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  6. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  7. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  8. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  9. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

To compare the linear, transparency, and photographic methods of measuring wound surface area, determine which is credible for accurately monitoring the progress of wound healing, and ascertain whether the methods differ significantly. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods simultaneously on alternate days. The linear method differed statistically significantly from the transparency and photographic methods (P value <0.05), but there was no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided equivalent measurements of wound surface area.

  10. Neural network and multiple linear regression to predict school children dimensions for ergonomic school furniture design.

    PubMed

    Agha, Salah R; Alnahhal, Mohammed J

    2012-11-01

The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study.
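A hedged sketch of the regression half of the study on synthetic data (the dimension names, means, and coefficients below are invented for illustration, not taken from the paper):

```python
# Synthetic sketch (dimension names and coefficients invented): predict a
# hard-to-measure dimension from two easy-to-measure ones by ordinary
# least squares, as in the study's multiple-linear-regression arm.
import numpy as np

rng = np.random.default_rng(7)
n = 200
stature  = rng.normal(140.0, 8.0, n)      # standing height, cm (assumed)
shoulder = rng.normal(110.0, 6.0, n)      # shoulder height, cm (assumed)
# Assumed "true" relation, used only to generate the synthetic sample
popliteal = 0.25 * stature + 0.05 * shoulder + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), stature, shoulder])
coef, *_ = np.linalg.lstsq(X, popliteal, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((popliteal - pred) ** 2) / np.sum((popliteal - popliteal.mean()) ** 2)
print(coef, r2)
```

Once such a fit is validated, the hard-to-measure dimension can be estimated from measurements taken while children are standing, which is the time-saving the abstract reports.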

  11. Measurement, Grades 4-6.

    ERIC Educational Resources Information Center

    Halton County Board of Education, Burlington (Ontario).

    This is a collection of mathematics laboratory activities related to the topics of linear and square measure. There are a number of experimental situations from which results may be generalized. Also included are worksheets, examples and discussion questions which are based on practical situations whenever possible. The materials are for student…

  12. Perfect commuting-operator strategies for linear system games

    NASA Astrophysics Data System (ADS)

    Cleve, Richard; Liu, Li; Slofstra, William

    2017-01-01

    Linear system games are a generalization of Mermin's magic square game introduced by Cleve and Mittal. They show that perfect strategies for linear system games in the tensor-product model of entanglement correspond to finite-dimensional operator solutions of a certain set of non-commutative equations. We investigate linear system games in the commuting-operator model of entanglement, where Alice and Bob's measurement operators act on a joint Hilbert space, and Alice's operators must commute with Bob's operators. We show that perfect strategies in this model correspond to possibly infinite-dimensional operator solutions of the non-commutative equations. The proof is based around a finitely presented group associated with the linear system which arises from the non-commutative equations.

  13. Characterization of Generalized Young Measures Generated by Symmetric Gradients

    NASA Astrophysics Data System (ADS)

    De Philippis, Guido; Rindler, Filip

    2017-06-01

    This work establishes a characterization theorem for (generalized) Young measures generated by symmetric derivatives of functions of bounded deformation (BD) in the spirit of the classical Kinderlehrer-Pedregal theorem. Our result places such Young measures in duality with symmetric-quasiconvex functions with linear growth. The "local" proof strategy combines blow-up arguments with the singular structure theorem in BD (the analogue of Alberti's rank-one theorem in BV), which was recently proved by the authors. As an application of our characterization theorem we show how an atomic part in a BD-Young measure can be split off in generating sequences.

  14. Measurement system analysis of viscometers used for drilling mud characterization

    NASA Astrophysics Data System (ADS)

    Mat-Shayuti, M. S.; Adzhar, S. N.

    2017-07-01

Viscometers in the Faculty of Chemical Engineering, Universiti Teknologi MARA, are subject to heavy utilization by members of the faculty. Owing to doubts surrounding the integrity of their results and their maintenance management, a Measurement System Analysis was executed. Five samples of drilling mud with barite content varied from 5 to 25 weight% were prepared, and their rheological properties were determined in 3 trials by 3 operators using the viscometers. A Gage Linearity and Bias study performed with Minitab software shows high biases in the range of 19.2% to 38.7%, with a non-linear trend along the span of measurements. A Gage Repeatability & Reproducibility (Nested) analysis then yields a percent Repeatability & Reproducibility above 7.7% and a percent tolerance above 30%. Lastly, good and marginal Distinct Categories outputs are seen among the results. Despite acceptable performance of the measurement system in Distinct Categories, the poor results in accuracy, linearity, and percent Repeatability & Reproducibility render the gage generally not capable. Improvement to the measurement system is therefore needed.
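The linearity-and-bias part of such an analysis can be reproduced by hand: regress the bias (measured minus reference) on the reference value. The numbers below are invented for illustration and are not the study's data.

```python
# Invented numbers (not the study's data): a hand-rolled Gage Linearity
# and Bias check. Bias = measured - reference; a clearly nonzero slope of
# bias versus reference means the gage's bias changes across its range,
# i.e., the gage is not linear.
import numpy as np

reference = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # wt% barite (assumed)
measured  = np.array([6.1, 12.3, 18.2, 24.5, 30.9])   # gage averages (invented)
bias = measured - reference

slope, intercept = np.polyfit(reference, bias, 1)     # linearity: slope of bias
pct_bias = 100.0 * np.abs(bias) / reference           # bias as % of reference
print(slope, pct_bias)
```

Here every point shows a bias above 20% of the reference value and the bias grows with the reference, the same qualitative pattern (large, non-constant bias) that led the study to judge the gage not capable.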

  15. An Evaluation of Nutrition Education Program for Low-Income Youth

    ERIC Educational Resources Information Center

    Kemirembe, Olive M. K.; Radhakrishna, Rama B.; Gurgevich, Elise; Yoder, Edgar P.; Ingram, Patreese D.

    2011-01-01

    A quasi-experimental design consisting of pretest, posttest, and delayed posttest comparison control group was used. Nutrition knowledge and behaviors were measured at pretest (time 1) posttest (time 2) and delayed posttest (time 3). General Linear Model (GLM) repeated measure ANCOVA results showed that youth who received nutrition education…

  16. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
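A miniature of the QUEST-family idea (our sketch, not Watson's implementation or API): a grid prior over a single threshold parameter, stimulus placement by minimum expected posterior entropy, and Bayesian updating against a simulated observer. The psychometric function, slope, and lapse values are assumptions.

```python
# Miniature QUEST-style adaptive procedure (our sketch, not QUEST+ itself):
# grid prior over a threshold, stimulus chosen to minimize expected
# posterior entropy, Bayesian update after each simulated trial.
import numpy as np

rng = np.random.default_rng(3)
thresholds = np.linspace(-2.0, 2.0, 81)     # parameter grid
posterior = np.ones_like(thresholds) / thresholds.size
stimuli = np.linspace(-2.0, 2.0, 41)        # candidate stimulus levels

def p_yes(stim, thr, slope=2.0, lapse=0.02):
    # Logistic psychometric function with a small lapse rate (assumed form)
    return lapse / 2 + (1 - lapse) / (1 + np.exp(-slope * (stim - thr)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

true_thr = 0.7
for _ in range(60):
    # Pick the stimulus minimizing the expected posterior entropy
    best_s, best_h = stimuli[0], np.inf
    for s in stimuli:
        pc = p_yes(s, thresholds)
        prob_yes = np.sum(posterior * pc)
        post_yes = posterior * pc;       post_yes = post_yes / post_yes.sum()
        post_no  = posterior * (1 - pc); post_no  = post_no / post_no.sum()
        h = prob_yes * entropy(post_yes) + (1 - prob_yes) * entropy(post_no)
        if h < best_h:
            best_h, best_s = h, s
    # Simulate the observer's response at that stimulus and update
    resp = rng.random() < p_yes(best_s, true_thr)
    like = p_yes(best_s, thresholds) if resp else 1 - p_yes(best_s, thresholds)
    posterior = posterior * like
    posterior = posterior / posterior.sum()

estimate = thresholds[np.argmax(posterior)]
print(estimate)
```

QUEST+ generalizes exactly this loop to multiple stimulus dimensions, multiple psychometric parameters (slope and lapse included), and arbitrary outcome sets.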

  17. Advances in diagnostic ultrasonography.

    PubMed

    Reef, V B

    1991-08-01

    A wide variety of ultrasonographic equipment currently is available for use in equine practice, but no one machine is optimal for every type of imaging. Image quality is the most important factor in equipment selection once the needs of the practitioner are ascertained. The transducer frequencies available, transducer footprints, depth of field displayed, frame rate, gray scale, simultaneous electrocardiography, Doppler, and functions to modify the image are all important considerations. The ability to make measurements off of videocassette recorder playback and future upgradability should be evaluated. Linear array and sector technology are the backbone of equine ultrasonography today. Linear array technology is most useful for a high-volume broodmare practice, whereas sector technology is ideal for a more general equine practice. The curved or convex linear scanner has more applications than the standard linear array and is equipped with the linear array rectal probe, which provides the equine practitioner with a more versatile unit for equine ultrasonographic evaluations. The annular array and phased array systems have improved image quality, but each has its own limitations. The new sector scanners still provide the most versatile affordable equipment for equine general practice.

  18. Nonlinear and linear wave equations for propagation in media with frequency power law losses

    NASA Astrophysics Data System (ADS)

    Szabo, Thomas L.

    2003-10-01

The Burgers, KZK, and Westervelt wave equations used for simulating wave propagation in nonlinear media are based on absorption that has a quadratic dependence on frequency. Unfortunately, most lossy media, such as tissue, follow a more general frequency power law. The authors' first research involved measurements of loss and dispersion associated with a modification to Blackstock's solution to the linear thermoviscous wave equation [J. Acoust. Soc. Am. 41, 1312 (1967)]. A second paper by Blackstock [J. Acoust. Soc. Am. 77, 2050 (1985)] showed that the loss term in the Burgers equation for plane waves could be modified for other known instances of loss. The authors' work eventually led to comprehensive time-domain convolutional operators that account for both dispersion and general frequency power law absorption [Szabo, J. Acoust. Soc. Am. 96, 491 (1994)]. Versions of appropriate loss terms were developed to extend the three standard nonlinear wave equations to these more general losses. Extensive experimental data have verified the predicted phase velocity dispersion for different power exponents in the linear case. Other groups are now working on methods suitable for solving wave equations with these types of loss numerically, directly in the time domain, for both linear and nonlinear media.

  19. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
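A hedged sketch of the pLASSO idea on synthetic data: augment the LASSO objective with a discrepancy term pulling coefficients toward prior guesses. The quadratic discrepancy ‖b − b_prior‖² and the proximal-gradient (ISTA) solver are our simplifications, not the paper's formulation or its LARS-style path algorithm.

```python
# Sketch of the prior-LASSO idea (our simplification, not the paper's
# algorithm): minimize  ||y - Xb||^2 / (2n) + lam1*||b||_1
#                       + lam2*||b - b_prior||^2
# by proximal gradient descent (ISTA) with soft-thresholding.
import numpy as np

rng = np.random.default_rng(11)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

b_prior = np.zeros(p); b_prior[:3] = [1.8, -1.2, 0.8]   # roughly accurate prior info

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def plasso(X, y, b_prior, lam1=0.1, lam2=0.5, iters=2000):
    n, p = X.shape
    # Step size 1/L, where L bounds the Lipschitz constant of the smooth part
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + 2 * lam2)
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n + 2 * lam2 * (b - b_prior)
        b = soft(b - step * grad, step * lam1)
    return b

b_hat = plasso(X, y, b_prior)
print(b_hat)
```

With accurate prior information the estimate is pulled toward the prior values while the l1 penalty still zeroes out the irrelevant coefficients; with misleading prior information one would shrink lam2 toward zero, recovering ordinary LASSO, which mirrors the robustness behavior the abstract describes.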

  20. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631

  1. 40 CFR 1066.20 - Units of measure and overview of calculations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Applicability and General Provisions § 1066.20 Units of..., repeatability, linearity, or noise specification. See 40 CFR 1065.1001 for the definition of tolerance. In this...

  2. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for a data acquisition campaign, information needs toward the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend before data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal.
The applicability and advantages are shown in a synthetic example: we consider a contaminant source posing a threat to a drinking water well in an aquifer, and assume uncertainty in geostatistical parameters, boundary conditions and the hydraulic gradient. The two measures evaluate the sensitivity of (1) general prediction confidence and (2) the exceedance probability of a legal regulatory threshold value to the sampling locations.
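The first measure, expected conditional variance, can be sketched in a toy preposterior analysis (our construction, far simpler than CLUE): simulate data for a candidate design, weight the parameter ensemble by a GLUE-style Gaussian likelihood, and average the resulting posterior variances of the prediction over many synthetic data sets.

```python
# Toy preposterior analysis (our construction, far simpler than CLUE):
# expected conditional variance of a prediction under a candidate design,
# estimated by Monte Carlo with GLUE-style Gaussian likelihood weighting.
import numpy as np

rng = np.random.default_rng(5)
M = 4000
theta = rng.normal(size=M)        # prior ensemble of the uncertain parameter
prediction = theta                # prediction goal g(theta) = theta

def expected_conditional_variance(data_model, noise_sd=0.3, n_rep=300):
    """Average posterior variance of the prediction over synthetic data sets."""
    sims = data_model(theta)
    total = 0.0
    for _ in range(n_rep):
        truth = rng.integers(M)                            # pick a synthetic "truth"
        d_obs = sims[truth] + rng.normal(scale=noise_sd)   # its noisy observation
        w = np.exp(-0.5 * ((sims - d_obs) / noise_sd) ** 2)
        w = w / w.sum()                                    # GLUE-style weights
        mean = np.sum(w * prediction)
        total += np.sum(w * (prediction - mean) ** 2)
    return total / n_rep

ecv_direct  = expected_conditional_variance(lambda th: th)                 # informative design
ecv_useless = expected_conditional_variance(lambda th: np.zeros_like(th))  # uninformative design
print(ecv_direct, ecv_useless)
```

A design that measures the predicted quantity directly drives the expected conditional variance well below the prior variance, while an uninformative design leaves it essentially unchanged; ranking candidate sampling locations by this quantity is the core of the optimal design step.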

  3. Airborne ultrasound applied to anthropometry--physical and technical principles.

    PubMed

    Lindström, K; Mauritzson, L; Benoni, G; Willner, S

    1983-01-01

    Airborne ultrasound has been utilized for remote measurement of distance, direction, size, form, volume and velocity. General anthropometrical measurements are performed with a newly constructed real-time linear array scanner. To make full use of the method, we expect rapid development of high-frequency ultrasound transducers for use in air.

  4. A General Multidimensional Model for the Measurement of Cultural Differences.

    ERIC Educational Resources Information Center

    Olmedo, Esteban L.; Martinez, Sergio R.

    A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…

  5. The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.

    ERIC Educational Resources Information Center

    Dunivant, Noel

    The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…

  6. Characterizing entanglement with global and marginal entropic measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio

    2003-12-01

    We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.
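For concreteness, the normalized linear entropy used as a mixedness measure can be computed directly. The sketch below (helper names are ours) checks the textbook case of a two-qubit Bell state, which is globally pure while its marginals are maximally mixed.

```python
import numpy as np

def linear_entropy(rho):
    # Normalized linear entropy S_L = d/(d-1) * (1 - Tr[rho^2]):
    # 0 for pure states, 1 for the maximally mixed state.
    d = rho.shape[0]
    return d / (d - 1) * (1.0 - np.trace(rho @ rho).real)

def marginal_A(rho_AB):
    # Partial trace over the second qubit of a two-qubit density matrix.
    return np.einsum('ijkj->ik', rho_AB.reshape(2, 2, 2, 2))

# Bell state (|00> + |11>)/sqrt(2): globally pure, marginals maximally mixed.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
```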

  7. Cluster-state quantum computing enhanced by high-fidelity generalized measurements.

    PubMed

    Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J

    2009-12-11

    We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832 ± 0.0002; furthermore, the error contribution from our POVM device and feedforward is only of O(10^-3), less than some recent thresholds for fault-tolerant cluster computing.

  8. On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
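A minimal Gauss-Jordan solver illustrates the scheme discussed above (an illustrative sketch with partial pivoting, not the paper's error-complexity analysis). Unlike Gaussian elimination, each pivot step eliminates entries both above and below the pivot, so no back-substitution pass is needed; the per-row updates are independent, which is the source of the scheme's inherent parallelism.

```python
import numpy as np

def gauss_jordan_solve(A, b):
    # Reduce the augmented matrix [A | b] all the way to [I | x].
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # Partial pivoting for numerical stability.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:              # eliminate above AND below the pivot
                M[i] -= M[i, k] * M[k]
    return M[:, -1]
```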

  9. Using complexity metrics with R-R intervals and BPM heart rate measures.

    PubMed

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics-fractal (DFA) and recurrence (RQA) analyses-reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.
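As a rough illustration of one of the non-linear metrics mentioned above, a minimal DFA estimator of the scaling exponent might look as follows (a simplified sketch with non-overlapping windows and linear detrending; not the authors' implementation):

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    # Detrended fluctuation analysis: integrate the series, detrend linearly
    # in non-overlapping windows, and take the log-log slope of fluctuation
    # size vs. window size.  alpha ~ 0.5 for white noise, ~ 1 for 1/f noise.
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        t = np.arange(s)
        msq = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Applied to an R-R interval series, `dfa_alpha` returns the exponent whose post-exercise changes the study tracks.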

  10. Using complexity metrics with R-R intervals and BPM heart rate measures

    PubMed Central

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics—fractal (DFA) and recurrence (RQA) analyses—reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, “oversampled” BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics. PMID:23964244

  11. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, we use linear programming to obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not restricted to binary outcomes. Combining our computed bound with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results but is calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  12. Las Matematicas: Lenguaje Universal. Nivel 3: La Medida (Mathematics: A Universal Language. Level 3: Measurement).

    ERIC Educational Resources Information Center

    Dissemination and Assessment Center for Bilingual Education, Austin, TX.

    This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include liquid, dry, linear, weight, and time measures. (MK)

  13. Path integral measure and triangulation independence in discrete gravity

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Steinhaus, Sebastian

    2012-02-01

    A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in (linearized) Regge calculus, which is a discrete model for quantum gravity, appearing in the semi-classical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D (linearized) Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano Regge Model, a 3D spin foam model for gravity. We furthermore discuss to which extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.

  14. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, the linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the top 10% of sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  15. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, the high noise of experimental measurements, and the insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these datasets.
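The low-rank idea can be sketched as a proximal-gradient loop in which the nuclear norm (the standard convex surrogate for rank) is applied to the regulator-target connectivity matrix. The helper names, step size and penalty weight below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def svt(W, tau):
    # Singular-value thresholding: the proximal operator of the nuclear norm,
    # which shrinks singular values and thereby encourages low rank.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_regression(X, Y, tau=0.5, lr=0.01, n_iter=500):
    # Proximal gradient descent on 0.5*||Y - X W||_F^2 + tau*||W||_*.
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        W = svt(W - lr * (X.T @ (X @ W - Y)), lr * tau)
    return W
```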

  16. General relationships between ultrasonic attenuation and dispersion

    NASA Technical Reports Server (NTRS)

    Odonnell, M.; Jaynes, E. T.; Miller, J. G.

    1978-01-01

    General relationships between the ultrasonic attenuation and dispersion are presented. The validity of these nonlocal relationships hinges only on the properties of causality and linearity, and does not depend upon details of the mechanism responsible for the attenuation and dispersion. Approximate, nearly local relationships are presented and are demonstrated to predict accurately the ultrasonic dispersion in solutions of hemoglobin from the results of attenuation measurements.

  17. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can easily be encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account correlations between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  18. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. The strain-versus-time response of tissues undergoing creep compression is known to be non-linear, and in non-linear cases devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), that provides a reliability measure for non-linear LSE parameter estimates by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
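The RoN idea can be sketched for a saturating-exponential strain model (the model form, grid-search fitter and helper names are illustrative assumptions, not the authors' implementation): fit once, estimate the noise level from the residuals, re-add simulated noise to the fitted curve, refit repeatedly, and report the spread of the refitted parameter.

```python
import numpy as np

def fit_tau(t, y, taus):
    # LSE fit of y ~ A * (1 - exp(-t/tau)): scan a grid of tau values,
    # solve the amplitude A in closed form, keep the best squared error.
    best_sse, best_tau = np.inf, taus[0]
    for tau in taus:
        g = 1.0 - np.exp(-t / tau)
        A = (g @ y) / (g @ g)
        sse = np.sum((y - A * g) ** 2)
        if sse < best_sse:
            best_sse, best_tau = sse, tau
    return best_tau

def resimulate_noise(t, y, taus, n_sim=100, seed=1):
    # RoN sketch: the spread of refitted taus over resimulated noisy data
    # estimates the precision of the single-realization estimate tau0.
    rng = np.random.default_rng(seed)
    tau0 = fit_tau(t, y, taus)
    g = 1.0 - np.exp(-t / tau0)
    A = (g @ y) / (g @ g)
    sigma = np.std(y - A * g)          # noise level from the residuals
    sims = [fit_tau(t, A * g + rng.normal(0.0, sigma, len(t)), taus)
            for _ in range(n_sim)]
    return tau0, float(np.std(sims))
```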

  19. Towards the Fundamental Quantum Limit of Linear Measurements of Classical Signals

    NASA Astrophysics Data System (ADS)

    Miao, Haixing; Adhikari, Rana X.; Ma, Yiqiu; Pang, Belinda; Chen, Yanbei

    2017-08-01

    The quantum Cramér-Rao bound (QCRB) sets a fundamental limit for the measurement of classical signals with detectors operating in the quantum regime. Using linear-response theory and the Heisenberg uncertainty relation, we derive a general condition for achieving such a fundamental limit. When applied to classical displacement measurements with a test mass, this condition leads to an explicit connection between the QCRB and the standard quantum limit that arises from a tradeoff between the measurement imprecision and quantum backaction; the QCRB can be viewed as an outcome of a quantum nondemolition measurement with the backaction evaded. Additionally, we show that the test mass is more a resource for improving measurement sensitivity than a victim of the quantum backaction, which suggests a new approach to enhancing the sensitivity of a broad class of sensors. We illustrate these points with laser interferometric gravitational-wave detectors.

  20. Are non-linearity effects of absorption important for MAX-DOAS observations?

    NASA Astrophysics Data System (ADS)

    Pukite, Janis; Wang, Yang; Wagner, Thomas

    2017-04-01

    For scattered light observations, the absorption optical depth depends non-linearly on trace gas concentrations if their absorption is strong. This is because the Beer-Lambert law is generally not applicable to scattered light measurements, where more than one light path contributes to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries with spatially extended and diffuse light paths, most notably satellite limb geometry, but also for nadir measurements. Fortunately, non-linear effects can be quantified by expanding the radiative transfer equation in a Taylor series with respect to the trace gas absorption coefficients. If necessary, (1) the higher order absorption structures can then be described as separate fit parameters in the DOAS fit, and (2) the constraints of VCD and profile retrieval algorithms can be improved by considering higher order sensitivity parameters. In this study we investigate the contribution of the higher order absorption structures for the MAX-DOAS observation geometry for different atmospheric and ground properties (cloud and aerosol effects, trace gas amount, albedo) and geometries (different Sun and viewing angles).

  1. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  2. Field measurements of the linear and nonlinear shear moduli of cemented alluvium using dynamically loaded surface footings

    NASA Astrophysics Data System (ADS)

    Park, Kwangsoo

    In this dissertation, a research effort aimed at developing and implementing a direct field test method to evaluate the linear and nonlinear shear modulus of soil is presented. The field method utilizes a surface footing that is dynamically loaded horizontally. The test procedure involves applying static and dynamic loads to the surface footing and measuring the soil response beneath the loaded area using embedded geophones. A wide range in dynamic loads under a constant static load permits measurements of linear and nonlinear shear wave propagation from which shear moduli and associated shearing strains are evaluated. Shear wave velocities in the linear and nonlinear strain ranges are calculated from time delays in waveforms monitored by geophone pairs. Shear moduli are then obtained using the shear wave velocities and the mass density of the soil. Shear strains are determined using particle displacements calculated from particle velocities measured at the geophones, assuming a linear variation between geophone pairs. The field test method was validated by conducting an initial field experiment at a sandy site in Austin, Texas. Then, field experiments were performed on cemented alluvium, a complex, hard-to-sample material. Three separate locations at Yucca Mountain, Nevada were tested. The tests successfully measured: (1) the effect of confining pressure on shear and compression moduli in the linear strain range and (2) the effect of strain on shear moduli at various states of stress in the field. The field measurements were first compared with empirical relationships for uncemented gravel. This comparison showed that the alluvium was clearly cemented. The field measurements were then compared to other independent measurements including laboratory resonant column tests and field seismic tests using the spectral-analysis-of-surface-waves method.
The results from the field tests were generally in good agreement with the other independent test results, indicating that the proposed method has the ability to directly evaluate complex material like cemented alluvium in the field.

  3. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem usually involves the ambiguous task of comparing two data sets specified in two different coordinate systems. This paper studies the fitting of a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of the point set to the target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting of the point cloud to the target surface converges in several linear steps. The method is applicable to contactless remote measurement of the geometry of large-scale objects.
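The first (linear) step and the SD criterion can be sketched as follows (helper names are ours; the SGM deformation step itself is not reproduced here). PCA resolves the coordinate-system ambiguity by moving the cloud into its principal-axes frame, and the squared distance to a sampled target surface serves as the convergence measure:

```python
import numpy as np

def pca_align(points):
    # Centre the unorganized cloud and rotate it into its principal-axes
    # frame (axes ordered by decreasing variance), via SVD of the
    # centred coordinates.
    centre = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centre, full_matrices=False)
    return (points - centre) @ Vt.T

def squared_distance(points, surface_points):
    # SD criterion: mean squared distance from each cloud point to its
    # nearest point on the (sampled) target surface.
    d2 = ((points[:, None, :] - surface_points[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()
```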

  4. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
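The closed-form measures quoted above are straightforward to evaluate; the sketch below (function and argument names are ours) collects them for the probit, log-log and logit links:

```python
from math import erf, exp, sqrt

def normal_cdf(z):
    # Standard normal cdf Phi via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordinal_superiority(beta, link):
    # Model-based P(Y1 > Y2) from the group effect beta, per the formulas
    # in the abstract; the logit-link expression is an approximation.
    if link == "probit":
        return normal_cdf(beta / sqrt(2.0))
    if link == "loglog":
        return exp(beta) / (1.0 + exp(beta))
    if link == "logit":
        return exp(beta / sqrt(2.0)) / (1.0 + exp(beta / sqrt(2.0)))
    raise ValueError(link)
```

With no group effect (β = 0), every link returns 0.5, i.e. neither group is stochastically superior.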

  5. A Comparison of Two Approaches for Measuring Educational Growth from CTBS and P-ACT+ Scores.

    ERIC Educational Resources Information Center

    Noble, Julie; Sawyer, Richard

    The purpose of the study was to compare two regression-based approaches for measuring educational effectiveness in Tennessee high schools: the mean residual approach (MR), and a more general linear models (LM) approach. Data were obtained from a sample of 1,011 students who were enrolled in 48 high schools, and who had taken the Comprehensive…

  6. Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain

    ERIC Educational Resources Information Center

    Wilson, Mark; Moore, Stephen

    2011-01-01

    This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…

  7. Real-time imaging of human brain function by near-infrared spectroscopy using an adaptive general linear model

    PubMed Central

    Abdelnour, A. Farras; Huppert, Theodore

    2009-01-01

    Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real-time using a 16-channel continuous wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state–space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task. PMID:19457389
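A minimal random-walk Kalman filter over GLM weights, in the spirit of the state-space formulation described above, might look as follows (dimensions, noise variances and helper names are illustrative assumptions, not the authors' implementation). Tracking the weights sample-by-sample is what makes single-trial, real-time estimation possible:

```python
import numpy as np

def kalman_glm(y, X, q=1e-4, r=1.0):
    # State-space form of the general linear model: the GLM weights beta_t
    # follow a random walk (process variance q) and each sample y[t] is a
    # noisy linear observation X[t] @ beta_t (observation variance r).
    n, p = X.shape
    beta, P = np.zeros(p), np.eye(p)
    trace = np.empty((n, p))
    for t in range(n):
        P = P + q * np.eye(p)          # predict: random-walk state
        x = X[t]
        S = x @ P @ x + r              # innovation variance
        K = P @ x / S                  # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x @ P)
        trace[t] = beta
    return trace
```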

  8. Cooperation without culture? The null effect of generalized trust on intentional homicide: a cross-national panel analysis, 1995-2009.

    PubMed

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.

  9. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    NASA Astrophysics Data System (ADS)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that disease (a count) are frequently observed together in medical studies with data of a clustered nature. In many cases the survival times and the count can be correlated, in the sense that diseases that occur rarely may have shorter survival times, or vice versa. Jointly modelling these two variables will therefore provide more interesting, and certainly improved, results than modelling them separately. The authors previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining a discrete-time hazard model with a Poisson regression model to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, we propose a new joint model of the survival and count of Dengue patients in Sri Lanka using that approach. The objective of this study is thus to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as the root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. These measures indicate that the GRNN model fits the data better than the GLMM model.
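A GRNN in Specht's sense reduces to Nadaraya-Watson kernel regression: predictions are Gaussian-kernel-weighted averages of the training targets. A few-line sketch (the function name and bandwidth are illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.1):
    # GRNN / Nadaraya-Watson: weight every training target by a Gaussian
    # kernel of the distance to the query point, then normalize.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)
```

The single smoothing parameter `sigma` plays the role of the GRNN spread; model-fit measures such as RMSE can then be computed between `grnn_predict` output and held-out targets.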

  10. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

    The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performance of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and the mean response over time for each subject were provided for testing time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of the SMA vs. the LMM and the traditional UMA was illustrated under different types of covariance structures. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. The LMM, however, was often not robust and led to non-sensible results when the covariance structure for the errors was misspecified. The results argue for discarding the UMA, which often yielded extremely conservative inferences for such data. The summary measure was shown to be a simple, safe and powerful approach whose loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice for reliably analyzing linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
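
    The summary measure approach described above reduces each subject's series to a single least-squares slope (for the trend) and mean, which are then compared with ordinary tests. A minimal sketch under that reading, with hypothetical data and a Welch t statistic standing in for the group comparison (data and test choice are illustrative assumptions, not the paper's):

    ```python
    import numpy as np

    def subject_slopes(times, Y):
        """Least-squares slope of response over time for each subject (rows of Y)."""
        t = np.asarray(times, float)
        X = np.vstack([t, np.ones_like(t)]).T
        coef, *_ = np.linalg.lstsq(X, np.asarray(Y, float).T, rcond=None)
        return coef[0]                       # one slope per subject

    def welch_t(a, b):
        """Welch two-sample t statistic for comparing summary measures."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        return (a.mean() - b.mean()) / np.sqrt(va + vb)

    # Hypothetical repeated measures: 3 subjects per group, 4 time points.
    t = [0, 1, 2, 3]
    g1 = [[1, 2, 3, 4], [0, 1, 2, 3], [1, 3, 5, 7]]   # rising trend
    g2 = [[4, 4, 4, 4], [5, 4, 5, 4], [3, 3, 4, 3]]   # flat trend
    # Comparing the groups' slopes tests the group-by-time interaction;
    # testing the pooled mean slope against zero tests the time effect.
    print(welch_t(subject_slopes(t, g1), subject_slopes(t, g2)))
    ```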

  11. Measured and predicted structural behavior of the HiMAT tailored composite wing

    NASA Technical Reports Server (NTRS)

    Nelson, Lawrence H.

    1987-01-01

    A series of load tests was conducted on the HiMAT tailored composite wing. Coupon tests were also run on a series of unbalanced laminates, including the ply configuration of the wing. The purpose was to compare the measured and predicted behavior of unbalanced laminates, including, in the case of the wing, a comparison between the behavior of the full-scale structure and the coupon tests. Both linear and nonlinear finite element (NASTRAN) analyses were carried out on the wing, and both linear and nonlinear point-stress analyses were performed on the coupons. All test articles were instrumented with strain gages, and wing deflections were measured. The leading and trailing edges were found to have no effect on the response of the wing to applied loads. A decrease in the stiffness of the wing box was evident over the 27-test program. The measured load-strain behavior of the wing was found to be linear, in contrast to coupon tests of the same laminate, which were nonlinear. A linear NASTRAN analysis of the wing generally correlated more favorably with measurements than did a nonlinear analysis. An examination of the predicted deflections in the wing root region revealed an anomalous behavior of the structural model that cannot be explained. Both hysteresis and creep appear to be less significant in the wing tests than in the corresponding laminate coupon tests.

  12. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application of GLMMs, and of the quality of the results and information reported from them, in the field of clinical medicine. A search using the Web of Science database was performed for original articles published in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", refined to the science technology research domain. Papers reporting methodological considerations without application, and those not involving clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles met the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the useful information about the GLMMs was not reported: variance estimates of the random effects were described in only 8 articles (9.2%), and model validation, the method of covariate selection, and the goodness-of-fit method were reported in only 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in the medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.

  13. Developmental Change in the Influence of Domain-General Abilities and Domain-Specific Knowledge on Mathematics Achievement: An Eight-Year Longitudinal Study

    PubMed Central

    Geary, David C.; Nicholas, Alan; Li, Yaoran; Sun, Jianguo

    2016-01-01

    The contributions of domain-general abilities and domain-specific knowledge to subsequent mathematics achievement were longitudinally assessed (n = 167) through 8th grade. First-grade intelligence and working memory and prior-grade reading achievement indexed domain-general effects; domain-specific effects were indexed by prior-grade mathematics achievement and mathematical cognition measures of prior-grade number knowledge, addition skills, and fraction knowledge. Use of functional data analysis enabled grade-by-grade estimation of overall domain-general and domain-specific effects on subsequent mathematics achievement, the relative importance of individual domain-general and domain-specific variables on this achievement, and linear and non-linear across-grade estimates of these effects. The overall importance of domain-general abilities for subsequent achievement was stable across grades, with working memory emerging as the most important domain-general ability in later grades. The importance of prior mathematical competencies for subsequent mathematics achievement increased across grades, with number knowledge and arithmetic skills critical in all grades and fraction knowledge in later grades. Overall, domain-general abilities were more important than domain-specific knowledge for mathematics learning in early grades, but general abilities and domain-specific knowledge were equally important in later grades. PMID:28781382

  14. 40 CFR 53.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...

  15. Contextual Fraction as a Measure of Contextuality.

    PubMed

    Abramsky, Samson; Barbosa, Rui Soares; Mansfield, Shane

    2017-08-04

    We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e., tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotonic with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement-based quantum computing.

  16. Contextual Fraction as a Measure of Contextuality

    NASA Astrophysics Data System (ADS)

    Abramsky, Samson; Barbosa, Rui Soares; Mansfield, Shane

    2017-08-01

    We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e., tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotonic with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement-based quantum computing.

  17. Direct measurement of nonlinear dispersion relation for water surface waves

    NASA Astrophysics Data System (ADS)

    Taklo, Tore Magnus Arnesen; Trulsen, Karsten; Krogstad, Harald Elias; Gramstad, Odin; Nieto Borge, José Carlos; Jensen, Atle

    2013-04-01

    The linear dispersion relation for water surface waves is often taken for granted in the interpretation of wave measurements. High-resolution spatiotemporal measurements suitable for direct validation of the linear dispersion relation are, on the other hand, rarely available. While imaging of the ocean surface with nautical radar does provide the desired spatiotemporal coverage, the interpretation of the radar images currently depends on the linear dispersion relation as a prerequisite (Nieto Borge et al., 2004). Krogstad & Trulsen (2010) carried out numerical simulations with the nonlinear Schrödinger equation and its generalizations, demonstrating that the nonlinear evolution of wave fields may render the linear dispersion relation inadequate for proper interpretation of observations, the reason being that the necessary domain of simultaneous coverage in space and time would allow significant nonlinear evolution. They found that components above the spectral peak can have larger phase and group velocities than anticipated by linear theory, and that the spectrum does not maintain a thin dispersion surface. We have run laboratory experiments and accurate numerical simulations designed to have sufficient resolution in space and time to deduce the dispersion relation directly. For a JONSWAP spectrum we find that the linear dispersion relation can be appropriate for the interpretation of spatiotemporal measurements. For a Gaussian spectrum with narrower bandwidth we find that the dynamic nonlinear evolution in space and time causes the directly measured dispersion relation to deviate from the linear dispersion surface, in good agreement with our previous numerical predictions. This work has been supported by RCN grant 214556/F20. Krogstad, H. E. & Trulsen, K. (2010) Interpretations and observations of ocean wave spectra. Ocean Dynamics 60:973-991. Nieto Borge, J. C., Rodríguez, G., Hessner, K., Izquierdo, P. (2004) Inversion of marine radar images for surface wave analysis. J. Atmos. Ocean. Tech. 21:1291-1300.

  18. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) to scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, owing to the many light paths contributing to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In such cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra and time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used, e.g., to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need to repeat the radiative transfer modelling when the absorption scenario is modified, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that of current measurements). Therefore, we introduce an iterative retrieval algorithm that corrects for the higher-order absorption structures not yet considered in the DOAS fit, as well as for the absorption dependence on temperature and scattering processes.

  19. Las Matematicas: Lenguaje Universal. Grados Intermedios, Nivel 5b: Medida Lineal, Perimetro y Area (Mathematics: A Universal Language. Intermediate Grades, Level 5b: Linear Measure, Perimeter and Area).

    ERIC Educational Resources Information Center

    Dissemination and Assessment Center for Bilingual Education, Austin, TX.

    This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include measurement, perimeter, and area. (MK)

  20. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with minimized system cost under uncertainty. The results represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP are expressed as fuzzy sets, which provide intervals for the decision variables and objective function, as well as the associated possibilities. The decision makers can therefore make a tradeoff between model stability and plausibility based on the GFLP solutions and then identify desired policies for SO2-emission control under uncertainty.

  1. Control design for robust stability in linear regulators: Application to aerospace flight control

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1986-01-01

    Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time-varying perturbation of an asymptotically stable linear time-invariant regulator, it is shown that these bounds can be further improved by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for the general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented, along with a comparison with other existing methods.

  2. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    NASA Astrophysics Data System (ADS)

    Wyszkowska, Patrycja

    2017-12-01

    The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation, or, for uncorrelated observations, the propagation of variance (the Gaussian formula), is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The analysis is based on a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even for non-linear functions, provided the accuracy of the observations is not too low. With present geodetic instruments, this is generally not a problem.
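
    The comparison drawn in this abstract, linearized (Gaussian) propagation versus Monte Carlo, can be illustrated on a simple non-linear function such as a planar distance computed from coordinate differences. A sketch with hypothetical values and uncertainties (not the paper's test cases):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical coordinate differences [m] and their standard uncertainties.
    x, y = 100.0, 200.0
    sx, sy = 0.01, 0.01

    dist = np.hypot(x, y)                      # non-linear function of x, y

    # Linearized (Gaussian) propagation: sigma^2 = (df/dx)^2 sx^2 + (df/dy)^2 sy^2
    dfdx, dfdy = x / dist, y / dist
    sigma_lin = np.sqrt(dfdx**2 * sx**2 + dfdy**2 * sy**2)

    # Monte Carlo propagation: simulate the observations, evaluate directly.
    sim = np.hypot(rng.normal(x, sx, 100_000), rng.normal(y, sy, 100_000))
    sigma_mc = sim.std(ddof=1)

    # For adequately precise observations the two estimates agree closely,
    # as the abstract concludes; degrade sx, sy and the expansion error grows.
    print(sigma_lin, sigma_mc)
    ```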

  3. Bring the Pythagorean Theorem "Full Circle"

    ERIC Educational Resources Information Center

    Benson, Christine C.; Malm, Cheryl G.

    2011-01-01

    Middle school mathematics generally explores applications of the Pythagorean theorem and lays the foundation for working with linear equations. The Grade 8 Curriculum Focal Points recommend that students "apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and…

  4. The ultrasound-enhanced bioscouring performance of four polygalacturonase enzymes obtained from rhizopus oryzae

    USDA-ARS?s Scientific Manuscript database

    An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...

  5. Bone and cartilage characteristics in postmenopausal women with mild knee radiographic osteoarthritis and those without radiographic osteoarthritis

    PubMed Central

    Multanen, J.; Heinonen, A.; Häkkinen, A.; Kautiainen, H.; Kujala, U.M.; Lammentausta, E.; Jämsä, T.; Kiviranta, I.; Nieminen, M.T.

    2015-01-01

    Objectives: To evaluate the association between radiographically-assessed knee osteoarthritis and femoral neck bone characteristics in women with mild knee radiographic osteoarthritis and those without radiographic osteoarthritis. Methods: Ninety postmenopausal women (mean age [SD], 58 [4] years; height, 163 [6] cm; weight, 71 [11] kg) participated in this cross-sectional study. The severity of radiographic knee osteoarthritis was defined using Kellgren-Lawrence grades 0=normal (n=12), 1=doubtful (n=25) or 2=minimal (n=53). Femoral neck bone mineral content (BMC), section modulus (Z), and cross-sectional area (CSA) were measured with DXA. The biochemical composition of ipsilateral knee cartilage was estimated using quantitative MRI measures, T2 mapping and dGEMRIC. The associations between radiographic knee osteoarthritis grades and bone and cartilage characteristics were analyzed using generalized linear models. Results: Age-, height-, and weight-adjusted femoral neck BMC (p for linearity=0.019), Z (p for linearity=0.033), and CSA (p for linearity=0.019) increased significantly with higher knee osteoarthritis grades. There was no linear relationship between osteoarthritis grades and knee cartilage indices. Conclusions: Increased DXA-assessed hip bone strength is related to knee osteoarthritis severity. These results support the hypothesis of an inverse relationship between osteoarthritis and osteoporosis. However, MRI-assessed measures of cartilage do not discriminate mild radiographic osteoarthritis severity. PMID:25730654

  6. Non-contact measurement of helicopter device position in wind tunnels with the use of optical videogrammetry method

    NASA Astrophysics Data System (ADS)

    Kuruliuk, K. A.; Kulesh, V. P.

    2016-10-01

    An optical videogrammetry method using one digital camera was developed for non-contact measurement of geometric shape parameters, position, and motion of models and structural elements of aircraft in experimental aerodynamics. The method was tested by measuring six components (three linear and three angular) of the actual position of a helicopter device in a wind tunnel flow. The distance between the camera and the test object was 15 meters. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at the minimum rotor thrust the deviations are systematic and generally lie within ±0.2 degrees. Deviations of the angle values grow as rotor thrust increases.

  7. Monte Carlo simulation for Neptun 10 PC medical linear accelerator and calculations of output factor for electron beam

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad

    2012-01-01

    Aim Exact knowledge of dosimetric parameters is an essential prerequisite of effective treatment in radiotherapy. In order to fulfill this consideration, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results The measurements were carried out with a Wellhofler-Scanditronix dose scanning system. Our findings revealed that output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurements are in very good agreement. Conclusion In general, the very good consistency of simulated and measured results is proof that the goal of this work has been accomplished. PMID:24377010

  8. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
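
    The "power fit relationship between the daily and initial tumor volumes" suggests an ordinary least-squares fit in log space. A hedged sketch of that idea with synthetic volumes (the model form, variable names, and numbers are illustrative assumptions, not the paper's fitted model):

    ```python
    import numpy as np

    def fit_power(v0, vd):
        """Fit v_d = a * v0**b by ordinary least squares in log space."""
        b, log_a = np.polyfit(np.log(v0), np.log(vd), 1)
        return np.exp(log_a), b

    # Synthetic training cohort: initial volumes and volumes on a later day.
    v0 = np.array([10.0, 20.0, 40.0, 80.0])
    vd = 0.5 * v0 ** 1.1                 # exact power law, for illustration
    a, b = fit_power(v0, vd)

    def predict(v_init):
        """Predicted later-day volume for a new patient's initial volume."""
        return a * v_init ** b
    ```

    Fitting one such relation per treatment day yields a daily volume prediction from the pretreatment volume alone, which is the spirit of the general linear model described above.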

  9. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    PubMed

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution, and the linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
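
    The finding that conflict counts follow a negative binomial rather than a Poisson distribution is typically diagnosed by overdispersion, i.e., a variance-to-mean ratio well above 1. A small sketch with hypothetical conflict counts (not the study's data):

    ```python
    import numpy as np

    def dispersion_index(counts):
        """Variance-to-mean ratio: ~1 for Poisson data, >1 (overdispersion)
        suggests a negative binomial model instead."""
        c = np.asarray(counts, float)
        return c.var(ddof=1) / c.mean()

    # Hypothetical opposing left-turn conflict counts per observation period:
    conflicts = np.array([0, 2, 1, 7, 0, 3, 12, 1, 0, 5])
    # A dispersion index well above 1 is the usual justification for a
    # negative binomial rather than a Poisson (or linear) conflict model.
    print(dispersion_index(conflicts))
    ```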

  10. Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    Ebrahimi, R.; Mazaheri, K.; Ghafourian, A.

    2000-01-01

    Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After the constituent modes are presumed, they are superposed using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are numerically solved for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. The results indicate that this simple method can be used in analyzing complicated pressure signal measurements.

  11. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
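
    For illustration, the Neumann-Ulam idea underlying this analysis estimates the solution of x = Hx + b by random walks whose weighted visits tally the Neumann series. A minimal forward-walk sketch (the paper studies the adjoint variant and its domain-decomposed behavior, neither of which is modeled here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def neumann_ulam(H, b, n_walks=5000, p_absorb=0.3):
        """Estimate the solution of x = H x + b (i.e. A x = b with A = I - H,
        spectral radius of H below 1) by forward Neumann-Ulam random walks:
        each walk tallies weighted source terms b along its path, giving an
        unbiased estimate of the Neumann series sum_k (H^k b)_i."""
        n = len(b)
        P = np.abs(H)
        P = P / P.sum(axis=1, keepdims=True)  # transition probabilities (rows assumed nonzero)
        x = np.zeros(n)
        for i in range(n):
            total = 0.0
            for _ in range(n_walks):
                state, w = i, 1.0
                total += w * b[state]
                while rng.random() > p_absorb:          # survive with prob 1 - p_absorb
                    nxt = rng.choice(n, p=P[state])
                    w *= H[state, nxt] / (P[state, nxt] * (1.0 - p_absorb))
                    state = nxt
                    total += w * b[state]
            x[i] = total / n_walks
        return x

    # Small diagonally dominant test system.
    H = np.array([[0.1, 0.2],
                  [0.3, 0.1]])
    b = np.array([1.0, 2.0])
    x_mc = neumann_ulam(H, b)
    x_exact = np.linalg.solve(np.eye(2) - H, b)
    ```

    The absorption probability controls the average walk length, the quantity the abstract relates to the operator's eigenvalues.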

  12. Cooperation without Culture? The Null Effect of Generalized Trust on Intentional Homicide: A Cross-National Panel Analysis, 1995–2009

    PubMed Central

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211

  13. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.

  14. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  15. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
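For readers unfamiliar with the sequential estimator that the epoch-state estimator is related to, a minimal scalar Kalman filter with process noise can be sketched as follows. This is a generic textbook illustration, not the paper's formulation; the state model (a noisy constant) and all numbers are hypothetical.

```python
# Illustrative scalar Kalman filter: estimate a constant state from
# noisy measurements, with process noise q inflating the covariance at
# each propagation step and measurement noise r in the update.

def kalman_scalar(measurements, x0, p0, q, r):
    x, p = x0, p0
    for z in measurements:
        p = p + q                  # propagate covariance (process noise)
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # measurement update
        p = (1 - k) * p            # covariance update
    return x, p

est, var = kalman_scalar([1.1, 0.9, 1.05, 0.98], x0=0.0, p0=10.0,
                         q=0.01, r=0.1)
print(round(est, 3), round(var, 4))  # estimate near 1, shrinking variance
```

Accounting for q in the gain k is the analogue of the paper's second extension: ignoring process noise when specifying gains yields a filter that is internally inconsistent with its own covariance.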

  16. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  17. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
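The second "older, traditional" method listed above, reducing each subject's repeated measures to a single summary number, can be sketched directly; the per-subject slope is the usual choice. The subjects, visit times, and scores below are hypothetical.

```python
# Sketch of the subject-level summary approach: fit an ordinary
# least-squares slope to each subject's repeated measures, yielding one
# change score per subject that can then be compared across groups.

def ols_slope(times, values):
    mt = sum(times) / len(times)
    mv = sum(values) / len(values)
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Hypothetical longitudinal scores at visits 0, 1, 2, 3.
subjects = {
    "s1": [10, 12, 14, 16],   # improving: +2 per visit
    "s2": [8, 9, 10, 11],     # improving: +1 per visit
    "s3": [20, 19, 18, 17],   # declining: -1 per visit
}
slopes = {s: ols_slope([0, 1, 2, 3], v) for s, v in subjects.items()}
print(slopes)
```

The mixed-effects models the article advocates generalize exactly this idea, treating each subject's intercept and slope as random draws from a population distribution rather than fitting them in isolation.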

  18. Curriculum-Based Measurement of Oral Reading: An Evaluation of Growth Rates and Seasonal Effects among Students Served in General and Special Education

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Silberglitt, Benjamin; Yeo, Seungsoo; Cormier, Damien

    2010-01-01

    Curriculum-based measurement of oral reading (CBM-R) is often used to benchmark growth in the fall, winter, and spring. CBM-R is also used to set goals and monitor student progress between benchmarking occasions. The results of previous research establish an expectation that weekly growth on CBM-R tasks is consistently linear throughout the…

  19. The Grassmannian Atlas: A General Framework for Exploring Linear Projections of High-Dimensional Data

    DOE PAGES

    Liu, S.; Bremer, P. -T; Jayaraman, J. J.; ...

    2016-06-04

Linear projections are one of the most common approaches to visualize high-dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. Here, the proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.

  20. Analysis of Parasite and Other Skewed Counts

    PubMed Central

    Alexander, Neal

    2012-01-01

Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten-year period in Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric mean. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods, such as the bootstrap, with potential for greater use are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
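The three measures of location contrasted in this abstract can be computed side by side. The definitions below are standard (the Williams mean is the geometric mean of x + 1, minus 1, which accommodates zero counts); the sample counts are hypothetical.

```python
# Arithmetic, geometric, and Williams means for a skewed count sample.
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):           # requires strictly positive values
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def williams_mean(xs):            # handles zero counts via the x + 1 shift
    return math.exp(sum(math.log(x + 1) for x in xs) / len(xs)) - 1

counts = [0, 1, 2, 4, 120]        # hypothetical skewed egg counts
print(arithmetic_mean(counts))    # 25.4, pulled up by the outlier
print(williams_mean(counts))      # much smaller "typical" value
```

The gap between the two results on the same data is exactly why the abstract urges clarity about which measure of location a paper reports.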

  1. Generalized susceptibilities and Landau parameters for anisotropic Fermi liquids

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ponte, P.; Cabra, D.; Grandi, N.

    2015-05-01

    We study Fermi liquids (FLs) with a Fermi surface that lacks continuous rotational invariance and in the presence of an arbitrary quartic interaction. We obtain the expressions of the generalized static susceptibilities that measure the linear response of a generic order parameter to a perturbation of the Hamiltonian. We apply our formulae to the spin and charge susceptibilities. Based on the resulting expressions, we make a proposal for the definition of the Landau parameters in nonisotropic FL.

  2. Inference of directed climate networks: role of instability of causality estimation methods

    NASA Astrophysics Data System (ADS)

    Hlinka, Jaroslav; Hartman, David; Vejmelka, Martin; Paluš, Milan

    2013-04-01

    Climate data are increasingly analyzed by complex network analysis methods, including graph-theoretical approaches [1]. For such analysis, links between localized nodes of climate network are typically quantified by some statistical measures of dependence (connectivity) between measured variables of interest. To obtain information on the directionality of the interactions in the networks, a wide range of methods exists. These can be broadly divided into linear and nonlinear methods, with some of the latter having the theoretical advantage of being model-free, and principally a generalization of the former [2]. However, as a trade-off, this generality comes together with lower accuracy - in particular if the system was close to linear. In an overall stationary system, this may potentially lead to higher variability in the nonlinear network estimates. Therefore, with the same control of false alarms, this may lead to lower sensitivity for detection of real changes in the network structure. These problems are discussed on the example of daily SAT and SLP data from the NCEP/NCAR reanalysis dataset. We first reduce the dimensionality of data using PCA with VARIMAX rotation to detect several dozens of components that together explain most of the data variability. We further construct directed climate networks applying a selection of most widely used methods - variants of linear Granger causality and conditional mutual information. Finally, we assess the stability of the detected directed climate networks by computing them in sliding time windows. To understand the origin of the observed instabilities and their range, we also apply the same procedure to two types of surrogate data: either with non-stationarity in network structure removed, or imposed in a controlled way. In general, the linear methods show stable results in terms of overall similarity of directed climate networks inferred. 
For instance, for different decades of SAT data, the Spearman correlation of edge weights in the networks is ~ 0.6. The networks constructed using nonlinear measures were in general less stable both in real data and stationarized surrogates. Interestingly, when the nonlinear method parameters are optimized with respect to temporal stability of the networks, the networks seem to converge close to those detected by linear Granger causality. This provides further evidence for the hypothesis of overall sparsity and weakness of nonlinear coupling in climate networks on this spatial and temporal scale [3] and sufficient support for the use of linear methods in this context, unless specific clearly detectable nonlinear phenomena are targeted. Acknowledgement: This study is supported by the Czech Science Foundation, Project No. P103/11/J068. [1] Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M. & Hwang, D. U.: Complex networks: Structure and dynamics, Physics Reports, 2006, 424, 175-308 [2] Barnett, L.; Barrett, A. B. & Seth, A. K.: Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables, Physical Review Letters, 2009, 103, 238701 [3] Hlinka, J.; Hartman, D.; Vejmelka, M.; Novotná, D.; Paluš, M.: Non-linear dependence and teleconnections in climate data: sources, relevance, nonstationarity, submitted preprint (http://arxiv.org/abs/1211.6688)
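The linear Granger-causality idea used to build the directed networks above can be sketched on a toy bivariate system; this is a generic lag-regression illustration, not the study's pipeline, and the coupled process and its coefficients are invented for the example.

```python
# Sketch of linear Granger causality: x "Granger-causes" y if adding
# lagged x to a regression of y on its own lag reduces the residual
# sum of squares.
import random

def lstsq2(rows, y):
    """Solve a 2-regressor least-squares fit via the normal equations."""
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * t for r, t in zip(rows, y))
    b2 = sum(r[1] * t for r, t in zip(rows, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):           # y driven by its own past and lagged x
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.1))

targets = y[1:]
# Restricted model: y_t ~ y_{t-1}; full model: y_t ~ y_{t-1} + x_{t-1}.
r_slope = (sum(a * b for a, b in zip(y[:-1], targets))
           / sum(a * a for a in y[:-1]))
rss_r = sum((t - r_slope * a) ** 2 for a, t in zip(y[:-1], targets))
b_y, b_x = lstsq2(list(zip(y[:-1], x[:-1])), targets)
rss_f = sum((t - b_y * a - b_x * c) ** 2
            for a, c, t in zip(y[:-1], x[:-1], targets))
print(rss_r > 2 * rss_f)          # lagged x clearly improves the fit
```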

  3. Collective effect of personal behavior induced preventive measures and differential rate of transmission on spread of epidemics

    NASA Astrophysics Data System (ADS)

    Sagar, Vikram; Zhao, Yi

    2017-02-01

In the present work, the effect of personal-behavior-induced preventive measures on the spread of epidemics is studied over scale-free networks that are characterized by a differential rate of disease transmission. The role of the preventive measures is parameterized in terms of a variable λ, which modulates the number of concurrent contacts a node makes with the fraction of its neighboring nodes. The dynamics of the disease is described by a non-linear Susceptible-Infected-Susceptible (SIS) model based on the discrete-time Markov chain method. The network mean-field approach is generalized to account for the effect of non-linear coupling between the aforementioned factors on the collective dynamics of nodes. The upper-bound estimates of the disease outbreak threshold obtained from mean-field theory are found to be in good agreement with the corresponding non-linear stochastic model. From the results of the parametric study, it is shown that the epidemic size has an inverse dependence on the preventive measures (λ). It is also shown that an increase in the average degree of the nodes shortens the time of spread and enhances the size of the epidemic.
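A homogeneous mean-field caricature of the discrete-time SIS dynamics can be iterated in a few lines. This is a standard simplification for illustration, not the paper's heterogeneous network model: the uniform-degree approximation, the parameter values, and the threshold condition beta·k/mu > 1 are the textbook versions.

```python
# Minimal discrete-time SIS mean-field iteration (homogeneous
# approximation): infected fraction rho recovers at rate mu and grows
# in proportion to beta * <k> * rho * (1 - rho).

def sis_mean_field(beta, mu, k_avg, rho0, steps):
    rho = rho0
    for _ in range(steps):
        rho = rho - mu * rho + beta * k_avg * rho * (1 - rho)
    return rho

# Above threshold (beta * k_avg / mu > 1) an endemic state persists;
# below it the infection dies out from the same initial seed.
endemic = sis_mean_field(beta=0.1, mu=0.2, k_avg=6, rho0=0.01, steps=2000)
died_out = sis_mean_field(beta=0.02, mu=0.2, k_avg=6, rho0=0.01, steps=2000)
print(endemic, died_out)
```

The endemic fixed point here is rho* = 1 - mu/(beta·k_avg), and raising k_avg raises rho*, consistent with the abstract's observation that higher average degree enhances epidemic size.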

  4. Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.

    ERIC Educational Resources Information Center

    Brant, Rollin

    Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…

  5. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    PubMed Central

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2014-01-01

    Purpose The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874

  6. Frequency response of synthetic vocal fold models with linear and nonlinear material properties.

    PubMed

    Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon

    2012-10-01

    The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.

  7. Non-Linear Optimization Applied to Angle-of-Arrival Satellite-Based Geolocation with Correlated Measurements

    DTIC Science & Technology

    2015-03-01

[Symbol list excerpt] General covariance intersection; covariance matrix; Σ1, Measurement 1's covariance matrix; I(X), Fisher information matrix; g, confidence region; L, lower... The information in this chapter will discuss the motivation and background of the geolocation algorithm with the scope of the applications for this research. The... algorithm is able to produce the best description of an object given the information from a set of measurements. Determining a position requires the use of a

  8. Phase space flows for non-Hamiltonian systems with constraints

    NASA Astrophysics Data System (ADS)

    Sergi, Alessandro

    2005-09-01

    In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.

  9. General implementation of arbitrary nonlinear quadrature phase gates

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

We propose a general methodology for deterministic single-mode quantum interactions that nonlinearly modify a single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from the lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables as well as qubits encoded in continuous-variable systems.

  10. General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    NASA Astrophysics Data System (ADS)

    Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.

    2018-02-01

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.

  11. Development of non-linear models predicting daily fine particle concentrations using aerosol optical depth retrievals and ground-based measurements at a municipality in the Brazilian Amazon region

    NASA Astrophysics Data System (ADS)

    Gonçalves, Karen dos Santos; Winkler, Mirko S.; Benchimol-Barbosa, Paulo Roberto; de Hoogh, Kees; Artaxo, Paulo Eduardo; de Souza Hacon, Sandra; Schindler, Christian; Künzli, Nino

    2018-07-01

Epidemiological studies generally use particulate matter measurements with diameter less than 2.5 μm (PM2.5) from monitoring networks. Satellite aerosol optical depth (AOD) data has considerable potential in predicting PM2.5 concentrations, and thus provides an alternative method for producing knowledge regarding the level of pollution and its health impact in areas where no ground PM2.5 measurements are available. This is the case in the Brazilian Amazon rainforest region, where forest fires are frequent sources of high pollution. In this study, we applied a non-linear model for predicting PM2.5 concentration from AOD retrievals using interaction terms between average temperature, relative humidity, the sine and cosine of the date over a period of 365.25 days, and the square of the lagged relative residual. Regression performance statistics were tested by comparing the goodness of fit and R2 based on results from linear regression and non-linear regression for six different models. The regression results for non-linear prediction showed the best performance, explaining on average 82% of the daily PM2.5 concentrations when considering the whole period studied. In the context of Amazonia, it was the first study predicting PM2.5 concentrations using the latest high-resolution AOD products in combination with the testing of non-linear model performance. Our results permitted a reliable prediction considering the AOD-PM2.5 relationship and set the basis for further investigations on air pollution impacts in the complex context of the Brazilian Amazon region.
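The seasonal terms described in this abstract, the sine and cosine of the date over a 365.25-day period entered alongside AOD, can be sketched with a tiny OLS fit. Everything below is illustrative: the design, the solver, and the synthetic PM2.5 series with known coefficients are assumptions, not the study's model.

```python
# Sketch of harmonic seasonal regressors plus an AOD term, fitted by
# ordinary least squares (normal equations + Gauss-Jordan elimination).
import math

def design_row(day, aod):
    w = 2 * math.pi * day / 365.25
    return [1.0, aod, math.sin(w), math.cos(w)]

def ols(X, y):
    """Solve the normal equations X'X b = X'y for b."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)]
         + [sum(r[i] * t for r, t in zip(X, y))] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))  # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Synthetic weekly PM2.5 over two years with a known AOD effect (50)
# and a known seasonal cycle (3 * sin - 2 * cos).
days = range(0, 730, 7)
X = [design_row(d, 0.1 + 0.001 * d) for d in days]
y = [10 + 50 * row[1] + 3 * row[2] - 2 * row[3] for row in X]
beta = ols(X, y)
print([round(b, 4) for b in beta])  # recovers [10, 50, 3, -2]
```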

  12. The application of information theory for the research of aging and aging-related diseases.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
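The mutual-information measure this review centers on can be computed directly from a discrete joint distribution via I(X;Y) = Σ p(x,y) log2[p(x,y) / (p(x)p(y))]. The definition is standard; the two toy joint tables below are illustrative, not data from the reviewed studies.

```python
# Mutual information (in bits) from a discrete joint distribution.
import math

def mutual_information(joint):
    """joint: dict {(x, y): p}. Returns I(X;Y) in bits."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
identical = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(independent))  # 0.0 bits: no dependence
print(mutual_information(identical))    # 1.0 bit: fully dependent
```

Unlike a linear correlation coefficient, this quantity captures arbitrary (including non-linear) dependence, which is the advantage the review emphasizes for multi-parametric aging data.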

  13. Measuring change for a multidimensional test using a generalized explanatory longitudinal item response model.

    PubMed

    Cho, Sun-Joo; Athay, Michele; Preacher, Kristopher J

    2013-05-01

    Even though many educational and psychological tests are known to be multidimensional, little research has been done to address how to measure individual differences in change within an item response theory framework. In this paper, we suggest a generalized explanatory longitudinal item response model to measure individual differences in change. New longitudinal models for multidimensional tests and existing models for unidimensional tests are presented within this framework and implemented with software developed for generalized linear models. In addition to the measurement of change, the longitudinal models we present can also be used to explain individual differences in change scores for person groups (e.g., learning disabled students versus non-learning disabled students) and to model differences in item difficulties across item groups (e.g., number operation, measurement, and representation item groups in a mathematics test). An empirical example illustrates the use of the various models for measuring individual differences in change when there are person groups and multiple skill domains which lead to multidimensionality at a time point. © 2012 The British Psychological Society.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madronich, Sasha; Kleinman, Larry; Conley, Andrew

Gas-to-particle partitioning of organic aerosols (OA) is represented in most models by Raoult’s law, and depends on the existing mass of particles into which organic gases can dissolve. This raises the possibility of non-linear response of particle-phase OA to the emissions of precursor volatile organic compounds (VOCs) that contribute to this partitioning mass. Implications for air quality management are evident: A strong non-linear dependence would suggest that reductions in VOC emission would have a more-than-proportionate benefit in lowering ambient OA concentrations. Chamber measurements on simple VOC mixtures generally confirm the non-linear scaling between OA and VOCs, usually stated as a mass-dependence of the measured OA yields. However, for realistic ambient conditions including urban settings, no single component dominates the composition of the organic particles, and deviations from linearity are presumed to be small. Here we re-examine the linearity question using volatility spectra from several sources: (1) chamber studies of selected aerosols, (2) volatility inferred for aerosols sampled in two megacities, Mexico City and Paris, and (3) an explicit chemistry model (GECKO-A). These few available volatility distributions suggest that urban OA may be only slightly super-linear, with most values of the sensitivity exponent in the range 1.1-1.3, also substantially lower than seen in chambers for some specific aerosols. Furthermore, the rather low values suggest that OA concentrations in megacities are not an inevitable convergence of non-linear effects, but can be addressed (much like in smaller urban areas) by proportionate reductions in emissions.

  15. The concept of collision strength and its applications

    NASA Astrophysics Data System (ADS)

    Chang, Yongbin

Collision strength, the measure of the strength of a binary collision, has not been clearly defined. In practice, many physical arguments have been employed for the purpose and taken for granted. The scattering angle has been widely and intensively used as a measure of collision strength in plasma physics for years. The result is complication and unnecessary approximation in deriving some of the basic kinetic equations and in calculating some of the basic physical terms. The Boltzmann equation has a complicated five-fold collision integral. Chandrasekhar and Spitzer's approaches to the linear Fokker-Planck coefficients involve several approximations. An effective variable-change technique has been developed in this dissertation as an alternative to the scattering angle as the measure of collision strength. By introducing the square of the reduced impulse or its equivalencies as a collision strength variable, many plasma calculations have been simplified. The five-fold linear Boltzmann collision integral and linearized Boltzmann collision integral are simplified to three-fold integrals. The arbitrary-order linear Fokker-Planck coefficients are calculated and expressed in a uniform expression. The new theory provides a simple and exact method for describing the equilibrium plasma collision rate, and a precise calculation of the equilibrium relaxation time. It generalizes bimolecular collision reaction rate theory to a reaction rate theory for plasmas. A simple formula of high precision with wide temperature range has been developed for electron impact ionization rates for carbon atoms and ions. The universality of the concept of collision strength is emphasized. 
This dissertation will show how Arrhenius' chemical reaction rate theory and Thomson's ionization theory can be unified as one single theory under the concept of collision strength, and how many important physical terms in different disciplines, such as activation energy in chemical reaction theory, ionization energy in Thomson's ionization theory, and the Coulomb logarithm in plasma physics, can be unified into a single one---the threshold value of collision strength. The collision strength, which is a measure of a transfer of momentum in units of energy, can be used to reconcile the differences between Descartes' opinion and Leibnitz's opinion about the "true" measure of a force. Like Newton's second law, which provides an instantaneous measure of a force, collision strength, as a cumulative measure of a force, can be regarded as part of a law of force in general.

  16. UAV Swarm Tactics: An Agent-Based Simulation and Markov Process Analysis

    DTIC Science & Technology

    2013-06-01

CRN, Common Random Numbers; CSV, Comma Separated Values; DoE, Design of Experiment; GLM, Generalized Linear Model; HVT, High Value Target; JAR, Java ARchive; JMF, Java Media Framework; JRE, Java Runtime Environment; Mason, Multi-Agent Simulator Of Networks; MOE, Measure Of Effectiveness; MOP, Measures Of Performance... with every set several times, and to write a CSV file with the results. Rather than scripting the agent behavior deterministically, the agents should

  17. Linear units improve articulation between social and physical constructs: An example from caregiver parameterization for children supported by complex medical technologies

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.

    2016-11-01

    Despite broad sweeping pronouncements by international research organizations that social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated physical and social sciences, which this report now asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex, medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into measurement properties of a social- humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R2 >.50), which provided further support for convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving integration of social sciences with contemporary scientific research programs by emphasizing construction of common variables with objective, linear units.
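The logit transformation underlying the linear-unit argument above maps a proportion p to its log-odds, log(p / (1 - p)). The definition is standard; the sample proportions are hypothetical and only demonstrate why raw scores are not linear units.

```python
# Logit (log-odds) transformation of proportions onto an additive scale.
import math

def logit(p):
    return math.log(p / (1 - p))

# The same 0.10 difference in raw proportions corresponds to very
# different distances on the logit scale, which is why untransformed
# ratings are not linear units.
print(round(logit(0.60) - logit(0.50), 3))  # near the middle
print(round(logit(0.95) - logit(0.85), 3))  # near the ceiling: larger
```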

  18. Short Round Sub-Linear Zero-Knowledge Argument for Linear Algebraic Relations

    NASA Astrophysics Data System (ADS)

    Seo, Jae Hong

Zero-knowledge arguments allow one party to prove that a statement is true, without leaking any other information than the truth of the statement. In many applications such as verifiable shuffle (as a practical application) and circuit satisfiability (as a theoretical application), zero-knowledge arguments for mathematical statements related to linear algebra are essentially used. Groth proposed (at CRYPTO 2009) an elegant methodology for zero-knowledge arguments for linear algebraic relations over finite fields. He obtained zero-knowledge arguments of sub-linear size for linear algebra using reductions from linear algebraic relations to equations of the form z = x *' y, where x, y ∈ F_p^n are committed vectors, z ∈ F_p is a committed element, and *' : F_p^n × F_p^n → F_p is a bilinear map. These reductions impose additional rounds on zero-knowledge arguments of sub-linear size. The round complexity of interactive zero-knowledge arguments is an important measure along with communication and computational complexities. We focus on minimizing the round complexity of sub-linear zero-knowledge arguments for linear algebra. To reduce round complexity, we propose a general transformation from a t-round zero-knowledge argument, satisfying mild conditions, to a (t - 2)-round zero-knowledge argument; this transformation is of independent interest.

  19. Action-angle formulation of generalized, orbit-based, fast-ion diagnostic weight functions

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.

    2017-09-01

    Due to the usually complicated and anisotropic nature of the fast-ion distribution function, diagnostic velocity-space weight functions, which indicate the sensitivity of a diagnostic to different fast-ion velocities, are used to facilitate the analysis of experimental data. Additionally, when velocity-space weight functions are discretized, a linear equation relating the fast-ion density and the expected diagnostic signal is formed. In a technique known as velocity-space tomography, many measurements can be combined to create an ill-conditioned system of linear equations that can be solved using various computational methods. However, when velocity-space weight functions (which by definition ignore spatial dependencies) are used, velocity-space tomography is restricted, both by the accuracy of its forward model and also by the availability of spatially overlapping diagnostic measurements. In this work, we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit-space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e., orbit tomography. In-depth derivations of orbit weight functions for the neutron, neutral particle analyzer, and fast-ion D-α diagnostics are also shown.
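    The tomography step described here amounts to solving an ill-conditioned linear system W f = s for the discretized distribution f. A standard remedy for such systems (a general technique, not this paper's specific solver) is Tikhonov regularization. The pure-Python sketch below applies it to a toy 2-unknown system with nearly collinear rows; every matrix and value is invented for illustration.

```python
# Tikhonov-regularized solution of a small ill-conditioned system W f = s,
# illustrating the kind of inversion used in velocity-space/orbit tomography.
# All matrices and values here are toy numbers, not diagnostic data.

def mat_t(a):
    # transpose of a matrix given as a list of rows
    return [list(col) for col in zip(*a)]

def mat_mul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def solve2(a, b):
    # solve a 2x2 system a x = b by Cramer's rule
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    x1 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return [x0, x1]

def tikhonov(w, s, lam):
    # minimize ||W f - s||^2 + lam ||f||^2  =>  (W^T W + lam I) f = W^T s
    wt = mat_t(w)
    wtw = mat_mul(wt, w)
    for i in range(2):
        wtw[i][i] += lam
    wts = [sum(wt[i][j] * s[j] for j in range(len(s))) for i in range(2)]
    return solve2(wtw, wts)

# nearly collinear rows -> ill-conditioned weight matrix
W = [[1.0, 1.0],
     [1.0, 1.0001],
     [2.0, 2.0]]
s = [2.0, 2.0001, 4.0]  # consistent with f = [1, 1]

f_reg = tikhonov(W, s, lam=1e-6)
print(f_reg)  # close to [1.0, 1.0]
```

The small ridge term lam stabilizes the inversion along the near-null direction of W while barely perturbing the well-determined component, which is the usual trade-off when regularizing tomographic reconstructions.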

  20. Baryonic Force for Accelerated Cosmic Expansion and Generalized U1b Gauge Symmetry in Particle-Cosmology

    NASA Astrophysics Data System (ADS)

    Khan, Mehbub; Hao, Yun; Hsu, Jong-Ping

    2018-01-01

    Based on baryon charge conservation and a generalized Yang-Mills symmetry for Abelian (and non-Abelian) groups, we discuss a new baryonic gauge field and its linear potential for two point-like baryon charges. The force between two point-like baryons is repulsive, extremely weak, and independent of distance. However, for two extended baryonic systems, we have a dominant linear force ∝ r. Thus, only in the later stage of cosmic evolution, when two baryonic galaxies are separated by an extremely large distance, can the new repulsive baryonic force overcome the attractive gravitational force. Such a model provides a gauge-field-theoretic understanding of the late-time accelerated cosmic expansion. The baryonic force can be tested by measuring the accelerated Wu-Doppler frequency shifts of supernovae at different distances.

  1. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model; it converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms of the GNAR model. The resulting linear model is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess how each newly introduced and each originally existing variable improves the model characteristics, and these statistics determine which model variables to retain or eliminate. The optimal model is then obtained through goodness-of-fit measurement or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable, and applicable to practical engineering.
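    The first step, using the partial autocorrelation function to fix the linear autoregressive order, can be sketched in pure Python with the Durbin-Levinson recursion. This is a generic illustration on a simulated AR(2) series, not the paper's algorithm; the data, significance band, and order rule are all invented for the example.

```python
# Sketch: estimate the partial autocorrelation function (PACF) with the
# Durbin-Levinson recursion and pick the linear AR order as the last lag
# whose PACF exceeds a rough significance band (illustrative data/threshold).
import math
import random

def acf(x, max_lag):
    # sample autocorrelations r[0..max_lag]
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / (n * c0)
            for k in range(max_lag + 1)]

def pacf(x, max_lag):
    # Durbin-Levinson: out[k-1] is the lag-k partial autocorrelation
    r = acf(x, max_lag)
    phi_prev, out = [], []
    for k in range(1, max_lag + 1):
        if k == 1:
            phi = [r[1]]
        else:
            num = r[k] - sum(phi_prev[j] * r[k - 1 - j] for j in range(k - 1))
            den = 1 - sum(phi_prev[j] * r[j + 1] for j in range(k - 1))
            a = num / den
            phi = [phi_prev[j] - a * phi_prev[k - 2 - j] for j in range(k - 1)] + [a]
        out.append(phi[-1])
        phi_prev = phi
    return out

random.seed(0)
# simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
x = [0.0, 0.0]
for _ in range(2000):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + random.gauss(0, 1))

p = pacf(x[100:], 6)                      # drop burn-in samples
band = 1.96 / math.sqrt(len(x) - 100)     # rough white-noise band
order = max((k + 1 for k, v in enumerate(p) if abs(v) > band), default=0)
print(order)  # near 2 for this AR(2) series; sampling noise can push it higher
```

For a true AR(p) process the PACF cuts off after lag p, which is why its last significant lag is a natural candidate for the linear order.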

  2. Resource Theory of Superposition

    NASA Astrophysics Data System (ADS)

    Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.

    2017-12-01

    The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace decreasing operations can be completed for free which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.

  3. Tests of Alignment among Assessment, Standards, and Instruction Using Generalized Linear Model Regression

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.; Polikoff, Morgan S.

    2014-01-01

    An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…

  4. Analysis of Binary Adherence Data in the Setting of Polypharmacy: A Comparison of Different Approaches

    PubMed Central

    Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.

    2009-01-01

    Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
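    The core issue this comparison addresses, correlation among the multiple adherence outcomes within one person, can be illustrated without a full GEE fit: for the overall adherence proportion, a cluster-robust standard error sums residuals within each person before squaring, while the naive SE treats every medication as independent. The sketch below uses invented data; GEE extends the same sandwich idea to full regression models.

```python
# Sketch: why per-medication binary adherence can't be treated as independent.
# Compare the naive SE of the overall adherence proportion with a
# cluster-robust SE that respects the person-level grouping (toy data).
import math

# adherence indicators per medication, grouped by person (1 = adherent)
people = [
    [1, 1, 1, 1],      # highly adherent person
    [1, 1, 1],
    [0, 0, 0, 0, 0],   # highly non-adherent person
    [1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]

obs = [y for p in people for y in p]
n = len(obs)
p_hat = sum(obs) / n

# naive SE: treats every medication as an independent Bernoulli trial
se_naive = math.sqrt(p_hat * (1 - p_hat) / n)

# cluster-robust (sandwich) SE: sums residuals within each person first,
# so correlated outcomes inside a person don't overstate the information
cluster_sums = [sum(y - p_hat for y in p) for p in people]
se_robust = math.sqrt(sum(s ** 2 for s in cluster_sums)) / n

print(round(se_naive, 4), round(se_robust, 4))  # robust SE is larger here
```

When within-person outcomes cluster, as in this toy example, the naive SE understates the uncertainty, which is one reason the simulation study favors GEE in the polypharmacy setting.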

  5. FDATMOS16 non-linear partitioning and organic volatility distributions in urban aerosols

    DOE PAGES

    Madronich, Sasha; Kleinman, Larry; Conley, Andrew; ...

    2015-12-17

    Gas-to-particle partitioning of organic aerosols (OA) is represented in most models by Raoult’s law, and depends on the existing mass of particles into which organic gases can dissolve. This raises the possibility of non-linear response of particle-phase OA to the emissions of precursor volatile organic compounds (VOCs) that contribute to this partitioning mass. Implications for air quality management are evident: A strong non-linear dependence would suggest that reductions in VOC emission would have a more-than-proportionate benefit in lowering ambient OA concentrations. Chamber measurements on simple VOC mixtures generally confirm the non-linear scaling between OA and VOCs, usually stated as a mass-dependence of the measured OA yields. However, for realistic ambient conditions including urban settings, no single component dominates the composition of the organic particles, and deviations from linearity are presumed to be small. Here we re-examine the linearity question using volatility spectra from several sources: (1) chamber studies of selected aerosols, (2) volatility inferred for aerosols sampled in two megacities, Mexico City and Paris, and (3) an explicit chemistry model (GECKO-A). These few available volatility distributions suggest that urban OA may be only slightly super-linear, with most values of the sensitivity exponent in the range 1.1-1.3, also substantially lower than seen in chambers for some specific aerosols. Furthermore, the rather low values suggest that OA concentrations in megacities are not an inevitable convergence of non-linear effects, but can be addressed (much like in smaller urban areas) by proportionate reductions in emissions.

  6. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  7. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single subject-level summary number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
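    Method (2) above, collapsing each subject's trajectory into one summary number, can be sketched with per-subject least-squares slopes. The data below are invented for illustration; in practice the slopes would then be compared across groups or against zero.

```python
# Sketch of the 'single summary per subject' approach from the overview:
# fit an OLS slope to each subject's repeated measurements, then analyze
# the slopes (here, just their mean). Data are made up for illustration.

def ols_slope(times, values):
    # ordinary least-squares slope of values regressed on times
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# each subject: (measurement times, outcome values); visits may be unequal
subjects = [
    ([0, 1, 2, 3], [10.0, 9.5, 9.1, 8.4]),   # declining
    ([0, 1, 2],    [12.0, 11.8, 11.1]),
    ([0, 2, 3],    [ 9.0,  9.2,  9.3]),      # roughly stable
]

slopes = [ols_slope(t, v) for t, v in subjects]
mean_slope = sum(slopes) / len(slopes)
print([round(s, 3) for s in slopes], round(mean_slope, 3))
```

This two-stage shortcut ignores differences in per-subject precision (numbers of visits, spacing), which is exactly what the recommended mixed-effects models handle by weighting subjects appropriately.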

  8. HIGH-TIME-RESOLUTION MEASUREMENTS OF THE POLARIZATION OF THE CRAB PULSAR AT 1.38 GHz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Słowikowska, Agnieszka; Stappers, Benjamin W.; Harding, Alice K.

    2015-01-20

    Using the Westerbork Synthesis Radio Telescope, we obtained high-time-resolution measurements of the full polarization of the Crab pulsar. At a resolution of 1/8192 of the 34 ms pulse period (i.e., 4.1 μs), the 1.38 GHz linear-polarization measurements are in general agreement with previous lower-time-resolution 1.4 GHz measurements of linear polarization in the main pulse (MP), in the interpulse (IP), and in the low-frequency component (LFC). We find the MP and IP to be linearly polarized at about 24% and 21% with no discernible difference in polarization position angle. However, contrary to theoretical expectations and measurements in the visible, we find no evidence for significant variation (sweep) in the polarization position angle over the MP, the IP, or the LFC. We discuss the implications, which appear to be in contradiction to theoretical expectations. We also detect weak circular polarization in the MP and IP, and strong (≈20%) circular polarization in the LFC, which also exhibits very strong (≈98%) linear polarization at a position angle of 40° from that of the MP or IP. The properties are consistent with the LFC, which is a low-altitude component, and the MP and IP, which are high-altitude caustic components. Current models for the MP and IP emission do not readily account for the absence of pronounced polarization changes across the pulse. We measure IP and LFC pulse phases relative to the MP consistent with recent measurements, which have shown that the phases of these pulse components are evolving with time.

  9. Force sensing using 3D displacement measurements in linear elastic bodies

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hui, Chung-Yuen

    2016-07-01

    In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.

  10. Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling

    PubMed Central

    Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.

    2010-01-01

    Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
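    For dyads measured at a single time point, the intrafamilial correlation the article emphasizes can be estimated directly from one-way ANOVA variance components. The sketch below uses the classic mean-squares estimator as a simpler stand-in for the article's linear mixed model; all dyad scores are invented.

```python
# Sketch: ANOVA-style intraclass correlation for family dyads.
# Between-dyad and within-dyad mean squares give the classic estimator
# ICC = (MSB - MSW) / (MSB + MSW) for groups of size 2 (toy data).

dyads = [
    (4.0, 5.0), (7.0, 6.5), (3.0, 3.5),
    (8.0, 7.0), (5.5, 5.0), (2.0, 3.0),
]

k = len(dyads)
grand = sum(a + b for a, b in dyads) / (2 * k)

# between-dyad mean square (2 observations per dyad, k-1 df)
msb = sum(2 * ((a + b) / 2 - grand) ** 2 for a, b in dyads) / (k - 1)
# within-dyad mean square (1 df per dyad)
msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
          for a, b in dyads) / k

icc = (msb - msw) / (msb + msw)
print(round(icc, 3))  # positive here: dyad members resemble each other
```

An ICC near zero would mean dyad members are no more alike than strangers, in which case ignoring the family grouping would cost little; the sizeable ICC in this toy example is the situation where an independence analysis misstates the standard errors.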

  11. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  12. Generalization of the swelling method to measure the intrinsic curvature of lipids

    NASA Astrophysics Data System (ADS)

    Barragán Vidal, I. A.; Müller, M.

    2017-12-01

    Via computer simulation of a coarse-grained model of two-component lipid bilayers, we compare two methods of measuring the intrinsic curvatures of the constituting monolayers. The first one is a generalization of the swelling method that, in addition to the assumption that the spontaneous curvature linearly depends on the composition of the lipid mixture, incorporates contributions from its elastic energy. The second method measures the effective curvature-composition coupling between the apposing leaflets of bilayer structures (planar bilayers or cylindrical tethers) to extract the spontaneous curvature. Our findings demonstrate that both methods yield consistent results. However, we highlight that the two-leaflet structure inherent to the latter method has the advantage of allowing measurements for mixed lipid systems up to their critical point of demixing as well as in the regime of high concentration (of either species).

  13. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  14. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.

  15. Measurement Consistency from Magnetic Resonance Images

    PubMed Central

    Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.

    2010-01-01

    Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, Bland-Altman method, paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results The correlation analysis and the linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique so it lacks the numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
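    Of the methods reviewed, Bland-Altman analysis reduces to two numbers computed from the paired differences: the bias (mean difference between raters) and the limits of agreement (bias ± 1.96 SD of the differences). A minimal numeric sketch, with invented rater measurements:

```python
# Sketch: the numbers behind a Bland-Altman plot for two raters measuring
# the same structures. The bias is the mean difference; the limits of
# agreement are bias +/- 1.96 SD of the differences (toy measurements, mm).
import math

rater_a = [21.0, 34.5, 18.2, 40.1, 27.3, 31.0, 25.4, 29.8]
rater_b = [20.4, 35.2, 18.9, 39.0, 27.9, 30.2, 26.1, 29.1]

diffs = [a - b for a, b in zip(rater_a, rater_b)]
n = len(diffs)

bias = sum(diffs) / n                                     # systematic offset
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)                # limits of agreement

print(round(bias, 3), [round(v, 3) for v in loa])
```

As the review notes, this yields a visualization and descriptive interval rather than a formal test, which is why it is best paired with a numerical method such as ANOVA when measurement bias must be detected.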

  16. Review and evaluation of recent developments in Melick inlet dynamic flow distortion prediction and computer program documentation and user's manual estimating maximum instantaneous inlet flow distortion from steady-state total pressure measurements with full, limited, or no dynamic data

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Dennon, S. R.

    1986-01-01

    A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments which lie along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.

  17. Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS

    NASA Technical Reports Server (NTRS)

    McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong

    2015-01-01

    The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.

  18. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    PubMed

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims: Grapevines growing in Australia are often exposed to very high temperatures, and how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology: Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands, and the model parameters and coefficients were applied to independent datasets to validate the model. Principal results: Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf-to-air vapour pressure deficit, and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions: The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220

  20. Radar orthogonality and radar length in Finsler and metric spacetime geometry

    NASA Astrophysics Data System (ADS)

    Pfeifer, Christian

    2014-09-01

    The radar experiment connects the geometry of spacetime with an observer's measurement of spatial length. We investigate the radar experiment on Finsler spacetimes, which leads to a general definition of radar orthogonality and radar length. The directions radar orthogonal to an observer form the spatial equal-time surface the observer experiences, and the radar length is the physical length the observer associates with spatial objects. We demonstrate these concepts on a fourth-order polynomial Finsler spacetime geometry which may emerge from area metric or premetric linear electrodynamics or in quantum gravity phenomenology. In an explicit generalization of Minkowski spacetime geometry we derive the deviation from the Euclidean spatial length measure in an observer's rest frame explicitly.

  1. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model for simulating measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  2. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  3. Schottky Noise and Beam Transfer Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaskiewicz, M.

    2016-12-01

    Beam transfer functions (BTFs) encapsulate the stability properties of charged particle beams. In general, one excites the beam with a sinusoidal signal and measures the amplitude and phase of the beam response. Most systems are very nearly linear, and one can use various Fourier techniques to reduce the number of measurements and/or simulations needed to fully characterize the response. Schottky noise is associated with the finite number of particles in the beam. This signal is always present. Since the Schottky current drives wakefields, the measured Schottky signal is influenced by parasitic impedances.

  4. Comparison of transcoelomic, contrast transcoelomic, and transesophageal echocardiography in anesthetized red-tailed hawks (Buteo jamaicensis).

    PubMed

    Beaufrère, Hugues; Pariaut, Romain; Rodriguez, Daniel; Nevarez, Javier G; Tully, Thomas N

    2012-10-01

    Objective: To assess the agreement and reliability of cardiac measurements obtained with 3 echocardiographic techniques in anesthetized red-tailed hawks (Buteo jamaicensis). Animals: 10 red-tailed hawks. Procedures: Transcoelomic, contrast transcoelomic, and transesophageal echocardiographic evaluations of the hawks were performed, and cineloops of imaging planes were recorded. Three observers performed echocardiographic measurements of cardiac variables 3 times on 3 days. The order in which hawks were assessed and echocardiographic techniques were used was randomized. Results were analyzed with linear mixed modeling, agreement was assessed with intraclass correlation coefficients, and variation was estimated with coefficients of variation. Results: Significant differences were evident among the 3 echocardiographic methods for most measurements, and the agreement among findings was generally low. Interobserver agreement was generally low to medium. Intraobserver agreement was generally medium to high. Overall, better agreement was achieved for the left ventricular measurements and for the transesophageal approach than for other measurements and techniques. Conclusions: Echocardiographic measurements in hawks were not reliable, except when the left ventricle was measured by the same observer. Furthermore, cardiac morphometric measurements may not be clinically important. When measurements are required, one needs to consider that follow-up measurements should be performed by the same echocardiographer and should show at least a 20% difference from initial measurements to be confident that any difference is genuine.

  5. Does competition improve financial stability of the banking sector in ASEAN countries? An empirical analysis.

    PubMed

    Noman, Abu Hanifa Md; Gee, Chan Sok; Isa, Che Ruhana

    2017-01-01

    This study examines the influence of competition on the financial stability of the commercial banks of the Association of Southeast Asian Nations (ASEAN) over the 1990 to 2014 period. Panzar-Rosse H-statistic, Lerner index and Herfindahl-Hirschman Index (HHI) are used as measures of competition, while Z-score, non-performing loan (NPL) ratio and equity ratio are used as measures of financial stability. Two-step system Generalized Method of Moments (GMM) estimates demonstrate that competition measured by H-statistic is positively related to Z-score and equity ratio, and negatively related to non-performing loan ratio. Conversely, market power measured by Lerner index is negatively related to Z-score and equity ratio and positively related to NPL ratio. These results strongly support the competition-stability view for ASEAN banks. We also capture the non-linear relationship between competition and financial stability by incorporating a quadratic term of competition in our models. The results show that the coefficient of the quadratic term of H-statistic is negative for the Z-score model given a positive coefficient of the linear term in the same model. These results support the non-linear relationship between competition and financial stability of the banking sector. The study contains significant policy implications for improving the financial stability of the commercial banks.
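    As a rough illustration of the quadratic (inverted-U) specification, the sketch below fits y = b0 + b1·x + b2·x² by ordinary least squares on synthetic data; plain OLS stands in for the paper's two-step system GMM, and all data and coefficients are hypothetical:

```python
import random

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

random.seed(0)
xs = [random.uniform(0, 1) for _ in range(200)]
ys = [1.0 + 2.0 * x - 3.0 * x * x + random.gauss(0, 0.05) for x in xs]

# Normal equations for y = b0 + b1*x + b2*x^2
X = [[1.0, x, x * x] for x in xs]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
b0, b1, b2 = solve3(XtX, Xty)
assert b1 > 0 and b2 < 0   # inverted-U: stability first rises, then falls
```

    A positive linear coefficient with a negative quadratic coefficient is exactly the sign pattern the record reports for the H-statistic in the Z-score model.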

  6. Does competition improve financial stability of the banking sector in ASEAN countries? An empirical analysis

    PubMed Central

    Gee, Chan Sok; Isa, Che Ruhana

    2017-01-01

    This study examines the influence of competition on the financial stability of the commercial banks of the Association of Southeast Asian Nations (ASEAN) over the 1990 to 2014 period. Panzar-Rosse H-statistic, Lerner index and Herfindahl-Hirschman Index (HHI) are used as measures of competition, while Z-score, non-performing loan (NPL) ratio and equity ratio are used as measures of financial stability. Two-step system Generalized Method of Moments (GMM) estimates demonstrate that competition measured by H-statistic is positively related to Z-score and equity ratio, and negatively related to non-performing loan ratio. Conversely, market power measured by Lerner index is negatively related to Z-score and equity ratio and positively related to NPL ratio. These results strongly support the competition-stability view for ASEAN banks. We also capture the non-linear relationship between competition and financial stability by incorporating a quadratic term of competition in our models. The results show that the coefficient of the quadratic term of H-statistic is negative for the Z-score model given a positive coefficient of the linear term in the same model. These results support the non-linear relationship between competition and financial stability of the banking sector. The study contains significant policy implications for improving the financial stability of the commercial banks. PMID:28486548

  7. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
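    The inverse-probability weighted estimator used as a comparator in this record rests on a simple idea: under MAR, reweighting observed cases by the inverse of their observation probability removes dropout bias. A minimal sketch with a known (rather than estimated) dropout model and hypothetical data:

```python
import random

random.seed(2)
n = 20000
data = []
for _ in range(n):
    x = random.random()                   # covariate
    y = 2 * x + random.gauss(0, 0.1)      # outcome, E[y] = 1.0
    p_obs = 0.9 - 0.6 * x                 # MAR: large x drops out more often
    observed = random.random() < p_obs
    data.append((x, y, p_obs, observed))

# Complete-case mean: biased, because dropout depends on x (and hence y)
naive = (sum(y for _, y, _, o in data if o)
         / sum(1 for *_, o in data if o))

# IPW (Hajek) mean: each observed y weighted by 1 / P(observed)
ipw = (sum(y / p for _, y, p, o in data if o)
       / sum(1 / p for _, y, p, o in data if o))

assert abs(naive - 1.0) > 0.05   # complete-case mean is biased low
assert abs(ipw - 1.0) < 0.05     # weighting recovers the true mean
```

    In the paper's setting the dropout probabilities are estimated from a model rather than known, which is where the "doubly robust" protection against misspecification matters.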

  8. A theoretical and experimental investigation of the linear and nonlinear impulse responses from a magnetoplasma column

    NASA Technical Reports Server (NTRS)

    Grody, N. C.

    1973-01-01

    Linear and nonlinear responses of a magnetoplasma resulting from inhomogeneity in the background plasma density are studied. The plasma response to an impulse electric field was measured and the results are compared with the theory of an inhomogeneous cold plasma. Impulse responses were recorded for the different plasma densities, static magnetic fields, and neutral pressures and generally appeared as modulated, damped oscillations. The frequency spectra of the waveforms consisted of two separated resonance peaks. For weak excitation, the results correlate with the linear theory of a cold, inhomogeneous, cylindrical magnetoplasma. The damping mechanism is identified with that of phase mixing due to inhomogeneity in plasma density. With increasing excitation voltage, the nonlinear impulse responses display stronger damping and a small increase in the frequency of oscillation.

  9. Structural stability of nonlinear population dynamics.

    PubMed

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. Focusing on the classic Lotka-Volterra dynamics, because of the linearity of the functional response, it has been possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, it is unclear to what extent this linear approach can be generalized to other population dynamics models. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, which is called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.
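    For the classic Lotka-Volterra case mentioned above, the structural approach hinges on whether a parameter set admits a feasible (all-positive) equilibrium N* solving A·N* = -r. A two-species sketch with illustrative parameters:

```python
def equilibrium(r, A):
    """Solve A @ N = -r for a 2-species Lotka-Volterra system
    dN_i/dt = N_i * (r_i + sum_j A_ij * N_j), via Cramer's rule."""
    (a, b), (c, d) = A
    det = a * d - b * c
    n1 = (-r[0] * d + r[1] * b) / det
    n2 = (-r[1] * a + r[0] * c) / det
    return n1, n2

A = [[-1.0, -0.3], [-0.3, -1.0]]       # competitive interactions
feasible = lambda r: all(n > 0 for n in equilibrium(r, A))
assert feasible([1.0, 1.0])            # these growth rates admit coexistence
assert not feasible([1.0, -0.5])       # these do not
```

    The set of growth-rate vectors r passing this feasibility check is the region whose size the structural approach uses as a stability measure.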

  10. Structural stability of nonlinear population dynamics

    NASA Astrophysics Data System (ADS)

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. Focusing on the classic Lotka-Volterra dynamics, because of the linearity of the functional response, it has been possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, it is unclear to what extent this linear approach can be generalized to other population dynamics models. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, which is called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.

  11. Effect Size Measure and Analysis of Single Subject Designs

    ERIC Educational Resources Information Center

    Society for Research on Educational Effectiveness, 2013

    2013-01-01

    One of the vexing problems in the analysis of SSD is in the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, the fitting of regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…

  12. Alternative Models for Small Samples in Psychological Research: Applying Linear Mixed Effects Models and Generalized Estimating Equations to Repeated Measures Data

    ERIC Educational Resources Information Center

    Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio

    2016-01-01

    Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…

  13. Cumulative Repetition Effects across Multiple Readings of a Word: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Kamienkowski, Juan E.; Carbajal, M. Julia; Bianchi, Bruno; Sigman, Mariano; Shalom, Diego E.

    2018-01-01

    When a word is read more than once, reading time generally decreases in the successive occurrences. This Repetition Effect has been used to study word encoding and memory processes in a variety of experimental measures. We studied naturally occurring repetitions of words within normal texts (stories of around 3,000 words). Using linear mixed…

  14. Effective Use of Multimedia Presentations to Maximize Learning within High School Science Classrooms

    ERIC Educational Resources Information Center

    Rapp, Eric

    2013-01-01

    This research used an evidence-based experimental 2 x 2 factorial design, analyzed with a General Linear Model with Repeated Measures Analysis of Covariance (RMANCOVA). For this analysis, time served as the within-subjects factor while treatment group (i.e., static and signaling, dynamic and signaling, static without signaling, and dynamic without signaling)…

  15. Mathematics. Unit 6: A Core Curriculum of Related Instruction for Apprentices.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Occupational and Career Curriculum Development.

    The mathematics unit is presented to assist apprentices to acquire a general knowledge of mathematics skills. The unit consists of nine modules: (1) basic addition, subtraction, multiplication, and division; (2) conventional linear measure; (3) using the metric system; (4) steps to take in solving problems; (5) how to calculate areas and volumes;…

  16. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
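    A minimal sketch of the Tikhonov-regularized minimum norm solution, x = Gᵀ(G Gᵀ + λI)⁻¹ y, for an underdetermined problem; a single measurement with two sources keeps the algebra scalar, and all values are illustrative:

```python
def tikhonov_min_norm(G, y, lam):
    """Underdetermined least squares, x = G^T (G G^T + lam*I)^{-1} y,
    for the 1-measurement, n-unknown case (G is a single row)."""
    g = G[0]
    gram = sum(gi * gi for gi in g) + lam   # G G^T + lam*I is 1x1 here
    w = y[0] / gram
    return [gi * w for gi in g]

G, y = [[1.0, 1.0]], [2.0]
x0 = tikhonov_min_norm(G, y, 0.0)      # unregularized minimum-norm solution
x1 = tikhonov_min_norm(G, y, 0.5)      # regularization shrinks the estimate
assert x0 == [1.0, 1.0]
assert all(abs(a) < abs(b) for a, b in zip(x1, x0))
```

    With λ = 0 this picks the smallest-norm solution among the infinitely many exact fits; λ > 0 trades fit for stability against measurement noise, which is the role regularization plays in the record's comparison.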

  17. Otoacoustic emissions in the general adult population of Nord-Trøndelag, Norway: III. Relationships with pure-tone hearing thresholds.

    PubMed

    Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J

    2005-01-01

    This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency by which transient-evoked OAEs (TEOAEs) and distortion-product OAEs (DPOAEs) can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operator characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL, depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.

  18. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analysis showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). Genetic correlation between body weight and operculum was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors.
Our results showed that there are prospects for genetic selection to improve deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
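    The liability-to-observed-scale transformation mentioned in this record is commonly done with the Dempster-Lerner formula h²_obs = h²_liab · z² / (p(1-p)), where p is the trait prevalence and z the standard normal density at the liability threshold; the sketch below uses illustrative values, and the authors' exact procedure may differ:

```python
from statistics import NormalDist

def liability_to_observed(h2_liab, prevalence):
    """Dempster-Lerner transformation of heritability from the liability
    scale to the observed 0/1 scale for a binary trait."""
    nd = NormalDist()
    t = nd.inv_cdf(1 - prevalence)     # threshold on the liability scale
    z = nd.pdf(t)                      # normal density at the threshold
    return h2_liab * z * z / (prevalence * (1 - prevalence))

h2_obs = liability_to_observed(0.5, 0.2)
assert h2_obs < 0.5                    # observed-scale estimate is smaller
```

    The shrinkage toward smaller observed-scale values is consistent with the record's pattern of higher liability-scale estimates (0.14-0.66) than observed-scale estimates (0.01-0.23).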

  19. Upper arm elevation and repetitive shoulder movements: a general population job exposure matrix based on expert ratings and technical measurements.

    PubMed

    Dalbøge, Annett; Hansson, Gert-Åke; Frost, Poul; Andersen, Johan Hviid; Heilskov-Hansen, Thomas; Svendsen, Susanne Wulff

    2016-08-01

    We recently constructed a general population job exposure matrix (JEM), The Shoulder JEM, based on expert ratings. The overall aim of this study was to convert expert-rated job exposures for upper arm elevation and repetitive shoulder movements to measurement scales. The Shoulder JEM covers all Danish occupational titles, divided into 172 job groups. For 36 of these job groups, we obtained technical measurements (inclinometry) of upper arm elevation and repetitive shoulder movements. To validate the expert-rated job exposures against the measured job exposures, we used Spearman rank correlations and the explained variance (R²) according to linear regression analyses (36 job groups). We used the linear regression equations to convert the expert-rated job exposures for all 172 job groups into predicted measured job exposures. Bland-Altman analyses were used to assess the agreement between the predicted and measured job exposures. The Spearman rank correlations were 0.63 for upper arm elevation and 0.64 for repetitive shoulder movements. The expert-rated job exposures explained 64% and 41% of the variance of the measured job exposures, respectively. The corresponding calibration equations were y = 0.5% time + 0.16 × expert rating and y = 27°/s + 0.47 × expert rating. The mean differences between predicted and measured job exposures were zero due to calibration; the 95% limits of agreement were ±2.9% time for upper arm elevation >90° and ±33°/s for repetitive shoulder movements. The updated Shoulder JEM can be used to present exposure-response relationships on measurement scales. Published by the BMJ Publishing Group Limited.
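    The calibration and Bland-Altman steps can be sketched with ordinary least squares on hypothetical ratings and measurements; by construction the calibrated predictions have zero mean difference, matching the record's observation:

```python
from statistics import mean, stdev

# Hypothetical expert ratings and measured exposures (% time arm > 90 deg)
expert = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0]
measured = [0.6, 0.8, 1.4, 1.7, 2.1, 3.0, 4.2]

# Least-squares calibration: predicted = intercept + slope * expert rating
mx, my = mean(expert), mean(measured)
slope = (sum((x - mx) * (y - my) for x, y in zip(expert, measured))
         / sum((x - mx) ** 2 for x in expert))
intercept = my - slope * mx
predicted = [intercept + slope * x for x in expert]

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diffs = [p - m for p, m in zip(predicted, measured)]
bias = mean(diffs)
loa = 1.96 * stdev(diffs)
assert abs(bias) < 1e-9   # OLS calibration forces the mean difference to zero
```

    The ±1.96·SD limits of agreement are the analogue of the record's ±2.9% time and ±33°/s bounds.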

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  1. Excitation correlation photoluminescence in the presence of Shockley-Read-Hall recombination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borgwardt, M., E-mail: mario.borgwardt@helmholtz-berlin.de; Sippel, P.; Eichberger, R.

    Excitation correlation photoluminescence (ECPL) measurements are often analyzed in the approximation of a cross correlation of charge carrier populations generated by the two delayed pulses. In semiconductors, this approach is valid for a linear non-radiative recombination path, but not for a non-linear recombination rate as in the general Shockley-Read-Hall recombination scenario. Here, the evolution of the ECPL signal was studied for deep trap recombination following Shockley-Read-Hall statistics. Analytic solutions can be obtained for a fast minority trapping regime and steady state recombination. For the steady state case, our results show that the quadratic radiative term plays only a minor role, and that the shape of the measured signal is mostly determined by the non-linearity of the recombination itself. We find that measurements with unbalanced intense pump and probe pulses can directly provide information about the dominant non-radiative recombination mechanism. The signal traces follow the charge carrier concentrations, despite the complex origins of the signal, thus showing that ECPL can be applied to study charge carrier dynamics in semiconductors without requiring elaborate calculations. The model is compared with measurements on a reference sample with alternating layers of InGaAs/InAlAs that were additionally cross-checked with time resolved optical pump terahertz probe measurements and found to be in excellent agreement.

  2. Escaping the snare of chronological growth and launching a free curve alternative: general deviance as latent growth model.

    PubMed

    Wood, Phillip Karl; Jackson, Kristina M

    2013-08-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. 
For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.

  3. Escaping the snare of chronological growth and launching a free curve alternative: General deviance as latent growth model

    PubMed Central

    WOOD, PHILLIP KARL; JACKSON, KRISTINA M.

    2014-01-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating “protective” or “launch” factors or as “developmental snares.” These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of “general deviance” over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the “general deviance” model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of “general deviance” can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. 
For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the “snares” alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control. PMID:23880389

  4. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
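    The distinction drawn here, the response at the mean covariate versus the mean response over the covariate distribution, can be illustrated with a logistic model; the coefficients and covariate distribution below are hypothetical:

```python
import math, random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

random.seed(1)
# Hypothetical fitted logistic model: P(y=1 | x) = sigmoid(b0 + b1*x)
b0, b1 = -2.0, 1.5
xs = [random.gauss(0, 2) for _ in range(10000)]

# "Response at the mean covariate": what many packages report
at_mean_x = sigmoid(b0 + b1 * (sum(xs) / len(xs)))

# Group mean: average the predicted responses over the covariate distribution
group_mean = sum(sigmoid(b0 + b1 * x) for x in xs) / len(xs)

# With a nonlinear link the two differ; only the second targets the true mean
assert abs(group_mean - at_mean_x) > 0.05
```

    For an identity link the two quantities coincide, which is why the record notes the problem does not arise in linear models.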

  5. Gradient-based adaptation of general gaussian kernels.

    PubMed

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimizing of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
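    A general Gaussian kernel with an adaptable scaling-and-rotation metric can be sketched by parameterizing the metric as M = BᵀB, which is positive semidefinite for any real B; the paper's exponential-map parameterization and gradient computation are beyond this sketch:

```python
import math

def general_gauss_kernel(x, y, B):
    """k(x, y) = exp(-(x-y)^T M (x-y)) with M = B^T B, so M is always
    positive semidefinite for any real matrix B."""
    d = [xi - yi for xi, yi in zip(x, y)]
    Bd = [sum(B[i][j] * d[j] for j in range(len(d))) for i in range(len(B))]
    return math.exp(-sum(v * v for v in Bd))

B = [[1.0, 0.5], [0.0, 2.0]]            # encodes scaling and rotation
k = general_gauss_kernel
assert k([0, 0], [0, 0], B) == 1.0      # kernel of a point with itself
assert 0 < k([1, 0], [0, 1], B) < 1     # distinct points give values in (0,1)
```

    Optimizing the entries of B (e.g. by gradient descent on a radius-margin bound) adapts the scaling and rotation of the input space, while constraining trace(M) controls the overall kernel size, as the abstract describes.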

  6. Genetic parameters for female fertility, locomotion, body condition score, and linear type traits in Czech Holstein cattle.

    PubMed

    Zink, V; Štípková, M; Lassen, J

    2011-10-01

    The aim of this study was to estimate genetic parameters for fertility traits and linear type traits in the Czech Holstein dairy cattle population. Phenotypic data regarding 12 linear type traits, measured in first lactation, and 3 fertility traits, measured in each of first and second lactation, were collected from 2005 to 2009 in the progeny testing program of the Czech-Moravian Breeders Corporation. The number of animals for each linear type trait was 59,467, except for locomotion, where 53,436 animals were recorded. The 3-generation pedigree file included 164,125 animals. (Co)variance components were estimated using AI-REML in a series of bivariate analyses, which were implemented via the DMU package. Fertility traits included days from calving to first service (CF1), days open (DO1), and days from first to last service (FL1) in first lactation, and days from calving to first service (CF2), days open (DO2), and days from first to last service (FL2) in second lactation. The number of animals with fertility data varied between traits and ranged from 18,915 to 58,686. All heritability estimates for reproduction traits were low, ranging from 0.02 to 0.04. Heritability estimates for linear type traits ranged from 0.03 for locomotion to 0.39 for stature. Estimated genetic correlations between fertility traits and linear type traits were generally neutral or positive, whereas genetic correlations between body condition score and CF1, DO1, FL1, CF2 and DO2 were mostly negative, with the greatest correlation between BCS and CF2 (-0.51). Genetic correlations with locomotion were greatest for CF1 and CF2 (-0.34 for both). Results of this study show that cows that are genetically extreme for angularity, stature, and body depth tend to perform poorly for fertility traits. At the same time, cows that are genetically predisposed for low body condition score or high locomotion score are generally inferior in fertility. Copyright © 2011 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.

  7. Measuring Efficiency of Secondary Healthcare Providers in Slovenia

    PubMed Central

    Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej

    2017-01-01

    Abstract The chief aim of this study was to analyze the efficiency of secondary healthcare providers, focusing on Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost (economic) efficiency of general hospitals. Methods We investigated efficiency with two econometric methods. First, we calculated efficiency scores with stochastic frontier analysis (SFA), obtained by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated scores based on the linear programming method. Results The two methods led to different conclusions: SFA identified Celje General Hospital as the most efficient general hospital, whereas DEA identified Brežice General Hospital. Conclusion Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. With the best practices of general hospitals at their disposal, these participants can more easily decide on further business operations of general hospitals. PMID:28730180
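
    The DEA side of the comparison can be illustrated in its simplest special case. With a single input and a single output, the CCR (constant-returns-to-scale) efficiency score reduces to each unit's output/input ratio divided by the best observed ratio; the general multi-input, multi-output case requires solving one linear program per hospital. A minimal Python sketch, with hypothetical figures (not data from the study):

```python
# Minimal sketch of DEA intuition, assuming a single input (e.g., hospital
# operating cost) and a single output (e.g., treated cases). In this special
# case the CCR efficiency score is each unit's output/input ratio divided by
# the best ratio; multi-input, multi-output DEA needs one LP per unit.
# All figures below are hypothetical, not data from the study.

def dea_ccr_single(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

costs = [120.0, 95.0, 210.0]      # hypothetical inputs per hospital
cases = [2400.0, 2185.0, 3990.0]  # hypothetical outputs per hospital
scores = dea_ccr_single(costs, cases)
# The efficient unit scores 1.0; every other unit scores strictly below 1.0.
```

    The same ratio logic is what the LP generalizes: each hospital is scored against the best virtual combination of its peers.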

  8. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yavari, M., E-mail: yavari@iaukashan.ac.ir

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  9. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

    Some existing studies have shown that certain linear dynamical systems defined on a dendritic network are equivalent, in special cases, to systems defined on a set of one-dimensional networks. This transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, together with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. These results make it easier to utilize the LC decomposition in various cases and may lead to a further understanding of the relation between structure and function of dendrimers in future studies.

  10. Static behavior and the effects of thermal cycling in hybrid laminates

    NASA Technical Reports Server (NTRS)

    Liber, T. M.; Daniel, I. M.; Chamis, C. C.

    1977-01-01

    Static stiffness, strength, and ultimate strain after thermal cycling were investigated for graphite/Kevlar 49/epoxy and graphite/S-glass/epoxy angle-ply laminates. Tensile stress-strain curves to failure and uniaxial tensile properties were determined, and theoretical predictions of modulus, Poisson's ratio, and ultimate strain were made based on linear lamination theory, constituent ply properties, and measured strength. No significant influence of stacking sequence variations on tensile properties was observed. In general, specimens containing two 0-degree Kevlar or S-glass plies were found to behave linearly to failure, while specimens containing four 0-degree Kevlar or S-glass plies showed some nonlinear behavior.

  11. Cyclotron resonance in bilayer graphene.

    PubMed

    Henriksen, E A; Jiang, Z; Tung, L-C; Schwartz, M E; Takita, M; Wang, Y-J; Kim, P; Stormer, H L

    2008-02-29

    We present the first measurements of cyclotron resonance of electrons and holes in bilayer graphene. In magnetic fields up to B = 18 T, we observe four distinct intraband transitions in both the conduction and valence bands. The transition energies are roughly linear in B between the lowest Landau levels, whereas they follow √B for the higher transitions. This highly unusual behavior represents a change from a parabolic to a linear energy dispersion. The density of states derived from our data generally agrees with the existing lowest-order tight-binding calculation for bilayer graphene. However, in comparing data to theory, a single set of fitting parameters fails to describe the experimental results.

  12. Time reversibility of intracranial human EEG recordings in mesial temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    van der Heyden, M. J.; Diks, C.; Pijn, J. P. M.; Velis, D. N.

    1996-02-01

    Intracranial electroencephalograms from patients suffering from mesial temporal lobe epilepsy were tested for time reversibility. If the recorded time series is irreversible, the input of the recording system cannot be a realisation of a linear Gaussian random process. We confirmed experimentally that the measurement equipment did not introduce irreversibility in the recorded output when the input was a realisation of a linear Gaussian random process. In general, the non-seizure recordings are reversible, whereas the seizure recordings are irreversible. These results suggest that time reversibility is a useful property for the characterisation of human intracranial EEG recordings in mesial temporal lobe epilepsy.
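
    A simple way to see what a reversibility test can look like (this is an illustrative statistic, not the authors' exact test): for a time-reversible stationary series, the increments x[t+τ] − x[t] are symmetrically distributed, so their skewness should be near zero, whereas a sawtooth-like signal with slow rises and sharp falls is clearly irreversible. A pure-Python sketch on synthetic data:

```python
import random

# Illustrative reversibility diagnostic (not the paper's exact statistic):
# for a time-reversible stationary series, increments x[t+tau] - x[t] are
# symmetrically distributed, so their skewness should be near zero. A
# sawtooth (slow rise, sharp fall) strongly violates this.

def increment_skewness(x, tau=1):
    d = [x[t + tau] - x[t] for t in range(len(x) - tau)]
    m = sum(d) / len(d)
    var = sum((v - m) ** 2 for v in d) / len(d)
    return sum((v - m) ** 3 for v in d) / (len(d) * var ** 1.5)

random.seed(0)
# Gaussian white noise: reversible, increment skewness ~ 0.
noise = [random.gauss(0, 1) for _ in range(20000)]
# Sawtooth: mostly small positive steps, occasional large negative drops.
saw = [(t % 50) / 50.0 for t in range(20000)]

s_noise = increment_skewness(noise)   # near zero
s_saw = increment_skewness(saw)       # strongly negative
```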

  13. Measuring multifractality of stock price fluctuation using multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Jin, Xiu

    2009-06-01

    Analyzing the daily returns of the Shanghai stock price index with the MF-DFA method, we find that there are two different sources of multifractality in the time series, namely, fat-tailed probability distributions and non-linear temporal correlations. Based on this, a sliding window of 240 observations covering 5 trading days was used to study stock price index fluctuations. It is found that when the stock price index fluctuates sharply, a strong variability is clearly characterized by the generalized Hurst exponents h(q). Therefore, two measures, Δh and σ, based on the generalized Hurst exponents were proposed to compare financial risks before and after the Price Limits and the Reform of Non-tradable Shares. The empirical results verify the validity of the measures, leading to a better understanding of complex stock markets.
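
    The core of the MF-DFA procedure the abstract relies on can be sketched compactly: integrate the centered series into a profile, detrend it linearly within windows of scale s, form the qth-order fluctuation function F_q(s), and read h(q) off the slope of log F_q(s) versus log s; Δh is then the spread of h(q) over the chosen q range. A minimal pure-Python sketch with illustrative scales and q values (not those of the study):

```python
import math
import random

# Pure-Python sketch of MF-DFA with linear detrending. Scales and q values
# are illustrative, not those used in the study.

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx  # slope, intercept

def mfdfa_h(series, scales, q):
    """Generalized Hurst exponent h(q) from the slope of log F_q(s) vs log s."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        f2 = []
        for v in range(len(profile) // s):
            seg = profile[v * s:(v + 1) * s]
            t = list(range(s))
            b, a = linfit(t, seg)
            f2.append(sum((y - (a + b * ti)) ** 2
                          for ti, y in zip(t, seg)) / s)
        fq = (sum(f ** (q / 2.0) for f in f2) / len(f2)) ** (1.0 / q)
        log_s.append(math.log(s))
        log_f.append(math.log(fq))
    slope, _ = linfit(log_s, log_f)
    return slope

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
scales = [16, 32, 64, 128, 256]
h2 = mfdfa_h(noise, scales, q=2)   # ~0.5 for uncorrelated Gaussian noise
# Multifractality width: spread of h(q) over the q range stays small
# for monofractal noise, and widens for genuinely multifractal series.
dh = mfdfa_h(noise, scales, q=-4) - mfdfa_h(noise, scales, q=4)
```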

  14. Emotional Reactions to Stress among Adolescent Boys and Girls: An Examination of the Mediating Mechanisms Proposed by General Strain Theory

    ERIC Educational Resources Information Center

    Sigfusdottir, Inga-Dora; Silver, Eric

    2009-01-01

    This study examines the effects of negative life events on anger and depressed mood among a sample of 7,758 Icelandic adolescents, measured as part of the National Survey of Icelandic Adolescents (Thorlindsson, Sigfusdottir, Bernburg, & Halldorsson, 1998). Using multiple linear regression and multinomial logit regression, we find that (a)…

  15. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.

  16. Associations between immunological function and memory recall in healthy adults.

    PubMed

    Wang, Grace Y; Taylor, Tamasin; Sumich, Alexander; Merien, Fabrice; Borotkanics, Robert; Wrapson, Wendy; Krägeloh, Chris; Siegert, Richard J

    2017-12-01

    Studies in clinical and aging populations support associations between immunological function, cognition, and mood, although these are not always in line with animal models. Moreover, very little is known about the relationship between immunological measures and cognition in healthy young adults. The present study tested associations between the state of the immune system and memory recall in a group of relatively healthy adults. Immediate and delayed memory recall was assessed in 30 participants using a computerised cognitive battery. CD4, CD8, and CD69 subpopulations of lymphocytes, interleukin-6 (IL-6), and cortisol were assessed with blood assays. Correlation analysis showed significant negative relationships between CD4 and the short- and long-delay memory measures. IL-6 showed a significant positive correlation with long-delay recall. Generalized linear models found associations between differences in all recall challenges and CD4. A multivariate generalized linear model including CD4 and IL-6 exhibited a stronger association. The results highlight the interaction between CD4 and IL-6 in relation to memory function. Further study is necessary to determine the mechanisms underlying the associations between the state of the immune system and cognitive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
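
    The flavor of a moment-based correction can be shown in a toy version (this is the textbook heteroscedastic attenuation correction for simple linear regression, not the paper's estimator): if each observation's error variance is known, subtracting the average error variance from the sample variance of the noisy predictor de-attenuates the naive OLS slope.

```python
import random

# Toy moment-based bias correction for a regression whose predictor is
# observed with heteroscedastic error (illustrative only, not the paper's
# method-of-moments or subsampling-extrapolation estimator).
# Model: y = beta * w_true + e, but we observe w = w_true + u, with a
# known error standard deviation for each observation.

random.seed(2)
n, beta = 5000, 2.0
w_true = [random.gauss(0, 1) for _ in range(n)]
sig_u = [random.uniform(0.3, 0.9) for _ in range(n)]   # heteroscedastic
w_obs = [wt + random.gauss(0, s) for wt, s in zip(w_true, sig_u)]
y = [beta * wt + random.gauss(0, 0.5) for wt in w_true]

def centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

wc, yc = centered(w_obs), centered(y)
s_ww = sum(x * x for x in wc) / n
s_wy = sum(a * b for a, b in zip(wc, yc)) / n

beta_naive = s_wy / s_ww                      # attenuated toward zero
mean_var_u = sum(s * s for s in sig_u) / n
beta_corr = s_wy / (s_ww - mean_var_u)        # moment-corrected slope
```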

  18. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  19. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  20. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the time of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations, three for a plane and four for a volume. This simplification means adding an additional receiver to the minimum mathematically required (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown dependency issues.
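
    The linearization trick the abstract describes can be reproduced in a few lines: squaring the range equations ‖s − r_i‖ = c(t_i − t_e) and subtracting the equation of a reference receiver cancels the quadratic terms, leaving, in 2-D with 3+1 receivers, three linear equations in the source coordinates and the emission time. A sketch with synthetic geometry (not the paper's notation):

```python
import math

# Linearized time-of-arrival localization sketch. With one receiver more
# than the strict minimum (4 in 2-D), subtracting pairs of squared range
# equations cancels the quadratic terms and leaves a linear system in the
# source position (x, y) and the emission time. Geometry and timings below
# are synthetic; this illustrates the idea, not the paper's formulation.

c = 343.0                                  # propagation speed (m/s)
src = (3.0, -2.0)
t_emit = 0.1
rec = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 9.0)]
t_arr = [t_emit + math.hypot(src[0] - x, src[1] - y) / c for x, y in rec]

# Build 3 linear equations (receiver i minus receiver 0) in (x, y, t_emit).
A, b = [], []
x0, y0 = rec[0]
for (xi, yi), ti in zip(rec[1:], t_arr[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0), -2 * c * c * (ti - t_arr[0])])
    b.append(xi ** 2 + yi ** 2 - x0 ** 2 - y0 ** 2
             - c * c * (ti ** 2 - t_arr[0] ** 2))

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

est_x, est_y, est_t = solve3(A, b)   # recovers src and t_emit
```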

  1. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.

  2. Non-linear temperature-dependent curvature of a phase change composite bimorph beam

    NASA Astrophysics Data System (ADS)

    Blonder, Greg

    2017-06-01

    Bimorph films curl in response to temperature. The degree of curvature typically varies in proportion to the difference in thermal expansion of the individual layers, and linearly with temperature. In many applications, such as controlling a thermostat, this gentle linear behavior is acceptable. In other cases, such as opening or closing a valve or latching a deployable column into place, an abrupt motion at a fixed temperature is preferred. To achieve this non-linear motion, we describe the fabrication and performance of a new bilayer structure we call a ‘phase change composite bimorph (PCBM)’. In a PCBM, one layer in the bimorph is a composite containing small inclusions of phase change materials. When the inclusions melt, their large (generally positive and  >1%) expansion coefficient induces a strong, reversible step function jump in bimorph curvature. The measured jump amplitude and thermal response is consistent with theory, and can be harnessed by a new class of actuators and sensors.

  3. How quantitative measures unravel design principles in multi-stage phosphorylation cascades.

    PubMed

    Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf

    2008-09-07

    We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
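
    The three quantitative measures named in the abstract are commonly defined as moments of the signal time course X(t) (the Heinrich-style definitions; the paper may parameterize them differently): signaling time τ = ∫t·X dt / ∫X dt, signal duration θ = sqrt(∫t²·X dt / ∫X dt − τ²), and signal amplitude S = ∫X dt / (2θ). A sketch on a toy exponentially decaying signal, where all three are known analytically:

```python
import math

# Moment-based signaling measures (Heinrich-style definitions, which the
# abstract's "signaling time, signal duration and signal amplitude" echo;
# the paper may use different conventions). Evaluated by the trapezoidal
# rule on a toy signal X(t) = exp(-t), for which tau = theta = 1, S = 0.5.

def signal_measures(ts, xs):
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) * (ts[i + 1] - ts[i]) / 2.0
                   for i in range(len(ts) - 1))
    area = trapz(xs)
    tau = trapz([t * x for t, x in zip(ts, xs)]) / area
    theta = math.sqrt(trapz([t * t * x for t, x in zip(ts, xs)]) / area
                      - tau * tau)
    return tau, theta, area / (2.0 * theta)

ts = [i * 0.01 for i in range(4001)]        # t = 0 .. 40
xs = [math.exp(-t) for t in ts]             # toy decaying signal
tau, theta, amp = signal_measures(ts, xs)
```

    Comparing cascade designs then reduces to comparing these scalars, e.g. minimizing the product tau * theta as the "fast response" criterion the abstract tests.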

  4. Uncertainty Analysis of the Grazing Flow Impedance Tube

    NASA Technical Reports Server (NTRS)

    Brown, Martha C.; Jones, Michael G.; Watson, Willie R.

    2012-01-01

    This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liner, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
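
    The Monte Carlo sensitivity step can be sketched generically. The impedance eduction model itself is not reproduced here, so the function below is a hypothetical stand-in; what the sketch shows is the procedure: perturb each aerodynamic parameter within an assumed uncertainty band while holding the others at nominal values, and compare the spread each induces in the output.

```python
import random

# One-at-a-time Monte Carlo sensitivity sketch. 'educe' is a hypothetical
# stand-in for the impedance eduction model (NOT the GFIT code); nominal
# values and 1-sigma uncertainties below are assumptions for illustration.

def educe(mach, temp_k, press_pa):
    # Hypothetical smooth response in the three aerodynamic parameters.
    return (2.0 + 5.0 * mach + 0.001 * (temp_k - 288.0)
            + 1e-7 * (press_pa - 101325.0))

nominal = {"mach": 0.3, "temp_k": 288.0, "press_pa": 101325.0}
uncert = {"mach": 0.005, "temp_k": 1.0, "press_pa": 200.0}   # assumed 1-sigma

random.seed(3)
spread = {}
for name in nominal:
    outs = []
    for _ in range(2000):
        p = dict(nominal)
        p[name] = random.gauss(nominal[name], uncert[name])
        outs.append(educe(**p))
    m = sum(outs) / len(outs)
    spread[name] = (sum((o - m) ** 2 for o in outs) / len(outs)) ** 0.5

# For this toy model, each std is ~ |dF/dp| * sigma_p:
# mach ~ 0.025, temp_k ~ 0.001, press_pa ~ 2e-5.
```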

  5. Corneal birefringence measured by spectrally resolved Mueller matrix ellipsometry and implications for non-invasive glucose monitoring

    PubMed Central

    Westphal, Peter; Kaltenbach, Johannes-Maria; Wicker, Kai

    2016-01-01

    A good understanding of the corneal birefringence properties is essential for polarimetric glucose monitoring in the aqueous humor of the eye. Therefore, we have measured complete 16-element Mueller matrices of single-pass transitions through nine porcine corneas in-vitro, spectrally resolved in the range 300…1000 nm. These ellipsometric measurements have been performed at several angles of incidence at the apex and partially at the periphery of the corneas. The Mueller matrices have been decomposed into linear birefringence, circular birefringence (i.e. optical rotation), depolarization, and diattenuation. We found considerable circular birefringence, strongly increasing with decreasing wavelength, for most corneas. Furthermore, the decomposition revealed significant dependence of the linear retardance (in nm) on the wavelength below 500 nm. These findings suggest that uniaxial and biaxial crystals are insufficient models for a general description of the corneal birefringence, especially in the blue and in the UV spectral range. The implications on spectral-polarimetric approaches for glucose monitoring in the eye (for diabetics) are discussed. PMID:27446644

  6. Transmission Measurement of the Third-Order Susceptibility of Gold

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Yoon, Youngkwon; Boyd, Robert W.; Crooks, Richard M.; George, Michael

    1999-01-01

    Gold nanoparticle composites are known to display large optical nonlinearities. In order to assess the validity of generalized effective medium theories (EMT's) for describing the linear and nonlinear optical properties of metal nanoparticle composites, knowledge of the linear and nonlinear susceptibilities of the constituent materials is a prerequisite. In this study the inherent nonlinearity of the metal is measured directly (rather than deduced from a suitable EMT) using a very thin gold film. Specifically, we have used the z-scan technique at a wavelength near the transmission window of bulk gold to measure the third-order susceptibility of a continuous thin gold film deposited on a quartz substrate surface-modified with a self-assembled monolayer to promote adhesion and uniformity without affecting the optical properties. We compare our results with predictions which ascribe the nonlinear response to a Fermi-smearing mechanism. Further, we note that the sign of the nonlinear susceptibility is reversed from that of gold nanoparticle composites.

  7. Improvements in aircraft extraction programs

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.; Maine, R. E.

    1976-01-01

    Flight data from an F-8 Corsair and a Cessna 172 were analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of the goodness of parameter estimates. They were not used as an absolute measure because of an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely nonstatistical basis, thereby also yielding an interpretation of the significance of the off-diagonal terms in the dispersion matrix. The distinction between linear and nonlinear coefficients was shown to be important in its implications for the recommended order of parameter iteration. Techniques for improving convergence in general were developed and tested on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause of failure to converge.

  8. Copula-based model for rainfall and El-Niño in Banyuwangi Indonesia

    NASA Astrophysics Data System (ADS)

    Caraka, R. E.; Supari; Tahmid, M.

    2018-04-01

    Modelling, describing, and measuring the dependence structure between different random events is at the very heart of statistics, and a broad variety of dependence concepts has been developed over time. Most often, practitioners rely only on linear correlation to describe the degree of dependence between two or more variables, an approach that can lead to quite misleading conclusions, as this measure is only capable of capturing linear relationships. Copulas go beyond simple dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a given set of El-Niño data for Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in modelling rainfall and capturing the El-Niño phenomenon in Banyuwangi, East Java, Indonesia. We also found that the SSTs of the Niño 3, Niño 4, and Niño 3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
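
    A concrete instance of Archimedean-copula dependence modelling is the Clayton copula, one member of the family the abstract refers to. The sketch below samples from it by the standard conditional-inversion method and recovers the dependence through Kendall's τ, which for Clayton satisfies τ = θ/(θ + 2); the parameter and sample size are illustrative, not fitted to the Banyuwangi data.

```python
import random

# Clayton copula sketch (an Archimedean family member). Sampling uses the
# standard conditional-inversion method; theta and the sample size are
# illustrative, not fitted to the rainfall/ENSO data from the study.

def sample_clayton(theta, n, rng):
    pairs = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        # Conditional inversion: v such that C(v | u) = w.
        v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta)
             + 1.0) ** (-1.0 / theta)
        pairs.append((u, v))
    return pairs

def kendall_tau(pairs):
    # Rank correlation: fraction of concordant minus discordant pairs.
    n, conc = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            conc += 1 if d > 0 else -1
    return 2.0 * conc / (n * (n - 1))

rng = random.Random(4)
theta = 2.0
tau = kendall_tau(sample_clayton(theta, 800, rng))
# For Clayton, Kendall's tau = theta / (theta + 2) = 0.5 here.
```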

  9. Origins of R2∗ orientation dependence in gray and white matter

    PubMed Central

    Rudko, David A.; Klassen, L. Martyn; de Chickera, Sonali N.; Gati, Joseph S.; Dekaban, Gregory A.; Menon, Ravi S.

    2014-01-01

    Estimates of the apparent transverse relaxation rate (R2*) can be used to quantify important properties of biological tissue. Surprisingly, the mechanism of R2* dependence on tissue orientation is not well understood. The primary goal of this paper was to characterize the orientation dependence of R2* in gray and white matter and relate it to independent measurements of two other susceptibility-based parameters: the local Larmor frequency shift (fL) and quantitative volume magnetic susceptibility (Δχ). Through this comparative analysis we calculated scaling relations quantifying R2′ (the reversible contribution to the transverse relaxation rate from local field inhomogeneities) in a voxel given measurements of the local Larmor frequency shift. R2′ is a measure of both perturber geometry and density and is related to tissue microstructure. Additionally, two methods (the Generalized Lorentzian model and iterative dipole inversion) for calculating Δχ were compared in gray and white matter. The value of Δχ derived from fitting the Generalized Lorentzian model was then connected to the observed orientation dependence using image-registered optical density measurements from histochemical staining. Our results demonstrate that the R2* and fL of white and cortical gray matter are well described by a sinusoidal dependence on the orientation of the tissue and a linear dependence on the volume fraction of myelin in the tissue. In deep brain gray matter structures, where there is no obvious symmetry axis, R2* and fL have no orientation dependence but retain a linear dependence on tissue iron concentration and hence Δχ. PMID:24374633

  10. The use of artificial neural networks and multiple linear regression to predict rate of medical waste generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari

    2009-11-15

    Prediction of the amount of hospital waste production will be helpful in the storage, transportation, and disposal stages of hospital waste management. On this basis, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharp, infectious, and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE, and R², were used to evaluate the models. MLR, as a conventional model, obtained poor values on the prediction performance measures. However, MLR identified hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R² of 0.99, confirming the good fit of the data. Such satisfactory results can be attributed to the non-linear nature of ANN problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the results show that our ANN-based modelling approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
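
    The MLR half of the comparison is easy to make concrete: fit coefficients from the normal equations (XᵀX)β = Xᵀy. The predictors below (beds, bed occupancy) echo the parameters the study found significant, but the numbers are hypothetical, not the Fars-province data.

```python
# Multiple linear regression via the normal equations, in pure Python.
# The predictors (beds, occupancy) echo the parameters the study found
# significant, but all numbers below are hypothetical illustrations.

def mlr_fit(X, y):
    # Augment with an intercept column and solve (X'X) beta = X'y.
    Xa = [[1.0] + row for row in X]
    k = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(k)]
    # Gauss-Jordan elimination with partial pivoting.
    M = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(k):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

# Hypothetical data: [beds, occupancy_rate] -> waste rate
X = [[100, 0.7], [150, 0.8], [200, 0.6], [250, 0.9], [300, 0.75], [120, 0.65]]
y = [2.0 * beds + 10.0 * occ + 5.0 for beds, occ in X]  # exact linear relation
beta = mlr_fit(X, y)   # recovers intercept 5.0 and slopes 2.0, 10.0
```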

  11. Evaluation of a Nonlinear Finite Element Program - ABAQUS.

    DTIC Science & Technology

    1983-03-15

    … anisotropic properties.
    * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties.
    * MATELG - Linearly elastic materials for general sections (options available for beam and shell elements).
    * MATEXG - Linearly elastic thermal expansions for general …
    Utility subroutines: … decomposition of a matrix, Q-R algorithm, vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has …

  12. Interpreting the g loadings of intelligence test composite scores in light of Spearman's law of diminishing returns.

    PubMed

    Reynolds, Matthew R

    2013-03-01

    The linear loadings of intelligence test composite scores on a general factor (g) have been investigated recently in factor analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the g loadings of test scores likely decrease in magnitude as g increases, or they are nonlinear. The purpose of this study was to (a) investigate whether the g loadings of composite scores from the Differential Ability Scales (2nd ed.) (DAS-II, C. D. Elliott, 2007a, Differential Ability Scales (2nd ed.). San Antonio, TX: Pearson) were nonlinear and (b) if they were nonlinear, to compare them with linear g loadings to demonstrate how SLODR alters the interpretation of these loadings. Linear and nonlinear confirmatory factor analysis (CFA) models were used to model Nonverbal Reasoning, Verbal Ability, Visual Spatial Ability, Working Memory, and Processing Speed composite scores in four age groups (5-6, 7-8, 9-13, and 14-17) from the DAS-II norming sample. The nonlinear CFA models provided better fit to the data than did the linear models. In support of SLODR, estimates obtained from the nonlinear CFAs indicated that g loadings decreased as g level increased. The nonlinear portion for the nonverbal reasoning loading, however, was not statistically significant across the age groups. Knowledge of general ability level informs composite score interpretation because g is less likely to produce differences, or is measured less, in those scores at higher g levels. One implication is that it may be more important to examine the pattern of specific abilities at higher general ability levels. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. Linear discrete systems with memory: a generalization of the Langmuir model

    NASA Astrophysics Data System (ADS)

    Băleanu, Dumitru; Nigmatullin, Raoul R.

    2013-10-01

    In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their corresponding exact solutions. The physical meanings of the proposed models are investigated and their corresponding geometries are reported.

  14. Relation between age and carotid artery intima-medial thickness: a systematic review.

    PubMed

    van den Munckhof, Inge C L; Jones, Helen; Hopman, Maria T E; de Graaf, Jacqueline; Nyakayiru, Jean; van Dijk, Bart; Eijsvogels, Thijs M H; Thijssen, Dick H J

    2018-05-12

    Carotid artery intima-medial thickness (cIMT) represents a popular measure of atherosclerosis and is predictive of future cardiovascular and cerebrovascular events. Although older age is associated with a higher cIMT, little is known about whether this increase in cIMT follows a linear relationship with age, or whether it is affected by cardiovascular disease (CVD) or CVD risk factors. We hypothesize that the relationship between cIMT and age is nonlinear and is affected by CVD or risk factors. A systematic review of studies that examined cIMT in the general population and in human populations free from CVD/risk factors was undertaken. The literature search was conducted in PubMed, Scopus, and Web of Science. Seventeen studies with 32 unique study populations, involving 10,124 healthy individuals free from CVD risk factors, were included. Furthermore, 58 studies with 115 unique study populations were included, involving 65,774 individuals from the general population (with and without CVD risk factors). A strong positive association was evident between age and cIMT in the healthy population, demonstrating a gradual, linear increase in cIMT that did not differ between age decades (r = 0.91, P < 0.001). Although populations including individuals with CVD demonstrated a higher cIMT compared to populations free of CVD, a linear relation between age and cIMT was also present in this population. Our data suggest that cIMT is strongly and linearly related to age. This linear relationship was not affected by CVD or risk factors. © 2018 Wiley Periodicals, Inc.

  15. Diminished autonomic neurocardiac function in patients with generalized anxiety disorder.

    PubMed

    Kim, Kyungwook; Lee, Seul; Kim, Jong-Hoon

    2016-01-01

    Generalized anxiety disorder (GAD) is a chronic and highly prevalent disorder that is characterized by a number of autonomic nervous system symptoms. The purpose of this study was to investigate the linear and nonlinear complexity measures of heart rate variability (HRV), measuring autonomic regulation, and to evaluate the relationship between HRV parameters and the severity of anxiety, in medication-free patients with GAD. Assessments of linear and nonlinear complexity measures of HRV were performed in 42 medication-free patients with GAD and 50 healthy control subjects. In addition, the severity of anxiety symptoms was assessed using the State-Trait Anxiety Inventory and Beck Anxiety Inventory. The values of the HRV measures of the groups were compared, and the correlations between the HRV measures and the severity of anxiety symptoms were assessed. The GAD group showed significantly lower values for the standard deviation of RR intervals and the square root of the mean squared differences of successive normal sinus intervals compared to the control group (P < 0.01). The approximate entropy value, which is a nonlinear complexity indicator, was also significantly lower in the patient group than in the control group (P < 0.01). In correlation analysis, there were no significant correlations between HRV parameters and the severity of anxiety symptoms. The present study indicates that GAD is significantly associated with reduced HRV, suggesting that autonomic neurocardiac integrity is substantially impaired in patients with GAD. Future prospective studies are required to investigate the effects of pharmacological or non-pharmacological treatment on neuroautonomic modulation in patients with GAD.
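The two linear HRV measures named in this record — the standard deviation of RR intervals (commonly abbreviated SDNN) and the root mean square of successive differences (RMSSD) — can be sketched in a few lines. This is a minimal illustration, not the authors' analysis pipeline; the RR-interval series below is hypothetical:

```python
from statistics import mean, pstdev

def sdnn(rr_ms):
    """Standard deviation of the RR intervals, in ms."""
    return pstdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return mean(d * d for d in diffs) ** 0.5

rr = [800, 810, 790, 805, 795, 815]  # hypothetical RR intervals in ms
print(round(sdnn(rr), 2), round(rmssd(rr), 2))  # 8.54 15.65
```

Lower values of both measures, as reported for the GAD group, indicate reduced beat-to-beat variability; the nonlinear approximate-entropy measure is omitted here for brevity.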

  16. Sufficient Dimension Reduction for Longitudinally Measured Predictors

    PubMed Central

    Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia

    2013-01-01

    We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score, without assuming a model and requiring only a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, which accommodates the longitudinal nature of the predictors, we develop first-moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than a score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives, using the AUC, the area under the receiver operating characteristic (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635

  17. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  18. Resultant as the determinant of a Koszul complex

    NASA Astrophysics Data System (ADS)

    Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.

    2009-09-01

    The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.
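The smallest nontrivial instance of the determinant-resultant relation described above is the classical Sylvester-matrix formula for two univariate polynomials. The example below (two quadratics) is an illustration chosen for concreteness, not taken from the record:

```latex
% For f(x) = a_0 x^2 + a_1 x + a_2 and g(x) = b_0 x^2 + b_1 x + b_2,
% the resultant is the determinant of the 4x4 Sylvester matrix:
\operatorname{Res}(f,g) \;=\; \det
\begin{pmatrix}
a_0 & a_1 & a_2 & 0   \\
0   & a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2 & 0   \\
0   & b_0 & b_1 & b_2
\end{pmatrix}.
```

Res(f, g) vanishes exactly when f and g share a common root, generalizing the statement that the determinant of a linear map vanishes exactly when the map is degenerate.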

  19. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well.
    Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components.
    Results: Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that part or all of the optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
    Conclusions: The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
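The allocation problem described above can be sketched numerically. The code below is a simplified two-stage version (subjects and occasions) of the paper's three-stage model, with a linear special case of the power-function cost model; function names, budget, unit costs, and variance components are all hypothetical:

```python
def mean_variance(n, m, var_between, var_within):
    """Variance of the estimated exposure mean with n subjects and
    m measurement occasions per subject (two-stage nested model)."""
    return var_between / n + var_within / (n * m)

def total_cost(n, m, c_subj, c_occ, e_subj=1.0, e_occ=1.0):
    """Power-function cost model: subject recruitment plus occasions."""
    return c_subj * n ** e_subj + c_occ * (n * m) ** e_occ

def best_allocation(budget, var_between, var_within, c_subj, c_occ,
                    e_subj=1.0, e_occ=1.0, n_max=200, m_max=20):
    """Exhaustive search for the (n, m) minimizing variance within budget."""
    best = None
    for n in range(1, n_max + 1):
        for m in range(1, m_max + 1):
            if total_cost(n, m, c_subj, c_occ, e_subj, e_occ) <= budget:
                v = mean_variance(n, m, var_between, var_within)
                if best is None or v < best[0]:
                    best = (v, n, m)
    return best

# Cheap recruitment relative to occasion costs: the optimum is one
# occasion per subject, matching the dominant pattern reported above.
print(best_allocation(budget=1000, var_between=4.0, var_within=1.0,
                      c_subj=2.0, c_occ=9.0))
```

Making recruitment expensive relative to occasions (and shrinking the between-subjects variance) shifts the optimum toward repeated measurements on fewer subjects, as the abstract describes; non-unit exponents can be explored via `e_subj` and `e_occ`.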

  20. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that part or all of the optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
    The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  1. Bisimulation equivalence of differential-algebraic systems

    NASA Astrophysics Data System (ADS)

    Megawati, Noorma Yulia; Schaft, Arjan van der

    2018-01-01

    In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.

  2. Emerging universe from scale invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Campo, Sergio; Herrera, Ramón; Guendelman, Eduardo I.

    2010-06-01

    We consider a scale-invariant model which includes an R^2 term in the action and show that a stable "emerging universe" scenario is possible. The model belongs to the general class of theories where an integration measure independent of the metric is introduced. To implement scale invariance (S.I.), a dilaton field is introduced. The integration of the equations of motion associated with the new measure gives rise to the spontaneous symmetry breaking (S.S.B.) of S.I. After S.S.B. of S.I. in the model with the R^2 term (and with the first-order formalism applied), it is found that a non-trivial potential for the dilaton is generated. The dynamics of the scalar field becomes non-linear, and these non-linearities are instrumental in the stability of some of the emerging universe solutions, which exist for a parameter range of the theory.

  3. Turbulent Stresses in LAPD and CSDX

    NASA Astrophysics Data System (ADS)

    Light, A. D.; Sechrest, Y.; Schaffner, D. A.; Muller, S. H.; Rossi, G. D.; Guice, D.; Carter, T. A.; Tynan, G. R.; Vincena, S.; Munsat, T.

    2011-10-01

    Turbulent momentum transport can affect phenomena as diverse as intrinsic rotation in self-organized systems, stellar dynamo, astrophysical accretion, and the mechanism of internal transport barriers in fusion devices. Contributions from turbulent fluctuations, in the form of Reynolds and Maxwell stress terms, have been predicted theoretically and observed in toroidal devices. In an effort to gain general insight into the physics, we present new results from turbulent stress measurements on two linear devices: the LArge Plasma Device (LAPD) at the University of California, Los Angeles, and the Controlled Shear De-correlation eXperiment (CSDX) at the University of California, San Diego. Both experiments are well-characterized linear machines in which the plasma beta can be varied. Electrostatic and magnetic fluctuations are measured over a range of plasma parameters in concert with fast imaging. Maxwell and Reynolds stresses are calculated from probe data and fluctuations are compared with fast camera images using velocimetry techniques.

  4. Assessing Validity of Measurement in Learning Disabilities Using Hierarchical Generalized Linear Modeling: The Roles of Anxiety and Motivation

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.

    2016-01-01

    The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…

  5. Solution of the Generalized Noah's Ark Problem.

    PubMed

    Billionnet, Alain

    2013-01-01

    The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point in moving from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. 
    Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases the average optimality guarantee varies from 0% to 1.20%.
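The linearization trick named in this record — replacing the concave logarithm by the lower envelope (pointwise minimum) of a set of tangents — can be sketched directly. Since every tangent lies above a concave curve, the minimum over tangents is an outer approximation that is exact at the tangent points, which is what makes the log term MILP-compatible. The tangent grid and evaluation range below are hypothetical:

```python
import math

def tangent(x0):
    """Tangent line to log at x0: t(x) = log(x0) + (x - x0) / x0."""
    return lambda x: math.log(x0) + (x - x0) / x0

def tangent_envelope(x, tangent_points):
    """Pointwise minimum over the tangents: an upper approximation of
    the concave log, exact at the tangent points."""
    return min(tangent(x0)(x) for x0 in tangent_points)

points = [0.1 * k for k in range(1, 11)]   # hypothetical tangent grid on (0, 1]
xs = [0.01 * k for k in range(5, 101)]     # evaluation grid on [0.05, 1]
worst = max(tangent_envelope(x, points) - math.log(x) for x in xs)
print(f"max over-approximation on [0.05, 1]: {worst:.4f}")
```

In a maximization MILP, a variable y standing for log(x) is constrained by y <= t_i(x) for every tangent t_i, so the solver works with this envelope; that is why the method yields both a near-optimal solution and an upper bound.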

  6. Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School

    ERIC Educational Resources Information Center

    Kenan, Kok Xiao-Feng

    2017-01-01

    This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…

  7. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and the sciences, such as plant topology reconstruction, fault detection and diagnosis, and neuroscience. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, has emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure it has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on a nonlinear measure called correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space, where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing the hypothesis of absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
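The correntropy measure on which KPDC is built can be sketched in its common Gaussian-kernel form, V(X, Y) = E[exp(-(X - Y)^2 / (2*sigma^2))], estimated by a sample average. This is a minimal illustration of the generalized correlation measure, not the KPDC estimator itself; the kernel width and the test series are hypothetical:

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample cross-correntropy with a Gaussian kernel: the mean of
    exp(-(x_i - y_i)^2 / (2 sigma^2)) over paired samples."""
    assert len(x) == len(y)
    return sum(math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
               for a, b in zip(x, y)) / len(x)

x = [0.0, 1.0, 2.0, 3.0]
print(correntropy(x, x))                    # identical series -> 1.0
print(correntropy(x, [v + 2.0 for v in x])) # constant offset 2 -> exp(-2)
```

Unlike the ordinary correlation it generalizes, correntropy involves all even moments of the difference X - Y through the kernel, which is what lets the kernel-space PDC capture nonlinear coupling.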

  8. Parkes full polarization spectra of OH masers - II. Galactic longitudes 240° to 350°

    NASA Astrophysics Data System (ADS)

    Caswell, J. L.; Green, J. A.; Phillips, C. J.

    2014-04-01

    Full polarization measurements of 1665 and 1667 MHz OH masers at 261 sites of massive star formation have been made with the Parkes radio telescope. Here, we present the resulting spectra for 157 southern sources, complementing our previously published 104 northerly sources. For most sites, these are the first measurements of linear polarization, with good spectral resolution and complete velocity coverage. Our spectra exhibit the well-known predominance of highly circularly polarized features, interpreted as σ components of Zeeman patterns. Focusing on the generally weaker and rarer linear polarization, we found three examples of likely full Zeeman triplets (a linearly polarized π component, straddled in velocity by σ components), adding to the solitary example previously reported. We also identify 40 examples of likely isolated π components, contradicting past beliefs that π components might be extremely rare. These were recognized at 20 sites where a feature with high linear polarization on one transition is accompanied on the other transition by a matching feature, at the same velocity and also with significant linear polarization. Large velocity ranges are rare, but we find eight exceeding 25 km s-1, some of them indicating high-velocity blue-shifted outflows. Variability was investigated on time-scales of one year and over several decades. More than 20 sites (of 200) show high variability (intensity changes by factors of 4 or more) in some prominent features. Highly stable sites are extremely rare.

  9. General relativistic description of the observed galaxy power spectrum: Do we understand what we measure?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jaiyul

    2010-10-15

    We extend the general relativistic description of galaxy clustering developed in Yoo, Fitzpatrick, and Zaldarriaga (2009). For the first time we provide a fully general relativistic description of the observed matter power spectrum and the observed galaxy power spectrum with the linear bias ansatz. It is significantly different from the standard Newtonian description on large scales, and in particular its measurements on large scales can be misinterpreted as the detection of primordial non-Gaussianity even in the absence thereof. The key difference in the observed galaxy power spectrum arises from the real-space matter fluctuation, defined as the matter fluctuation at the hypersurface of the observed redshift. As opposed to the standard description, the shape of the observed galaxy power spectrum evolves in redshift, providing additional cosmological information. While the systematic errors in the standard Newtonian description are negligible in current galaxy surveys at low redshift, a correct general relativistic description is essential for understanding galaxy power spectrum measurements on large scales in future surveys with redshift depth z >= 3. We discuss ways to improve the detection significance in current galaxy surveys and comment on applications of our general relativistic formalism in future surveys.

  10. Quantum corrections to the generalized Proca theory via a matter field

    NASA Astrophysics Data System (ADS)

    Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab

    2017-09-01

    We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.

  11. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.

  12. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
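For the linear special case these records contrast with the general model, a Wiener degradation path X(t) = mu*t + sigma*B(t) has closed-form maximum-likelihood estimates computed from the independent Gaussian increments of one observed path. The sketch below assumes this basic single-path, no-acceleration-variable setting (the paper's general model replaces t with a transformed time scale); the readings are hypothetical:

```python
def wiener_mle(times, values):
    """Closed-form MLEs of drift mu and diffusion sigma^2 for one observed
    path of X(t) = mu*t + sigma*B(t), using its independent increments."""
    dts = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    dxs = [x1 - x0 for x0, x1 in zip(values, values[1:])]
    mu = (values[-1] - values[0]) / (times[-1] - times[0])
    sigma2 = sum((dx - mu * dt) ** 2 / dt
                 for dx, dt in zip(dxs, dts)) / len(dxs)
    return mu, sigma2

# Hypothetical degradation readings at unit time steps.
mu, sigma2 = wiener_mle([0, 1, 2, 3, 4], [0.0, 1.2, 1.9, 3.1, 4.0])
print(round(mu, 3), round(sigma2, 4))  # 1.0 0.045
```

For a nonlinear degradation path, the same estimates apply after replacing t by a monotone time-scale transformation Lambda(t) (e.g. t^b), which is the kind of generalization the proposed model handles without forcing linearization.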

  13. A novel miniature in-line load-cell to measure in-situ tensile forces in the tibialis anterior tendon of rats

    PubMed Central

    Unger, Ewald; Bijak, Manfred; Stoiber, Martin; Lanmüller, Hermann; Jarvis, Jonathan Charles

    2017-01-01

    Direct measurements of muscular forces usually require a substantial rearrangement of the biomechanical system. To circumvent this problem, various indirect techniques have been used in the past. We introduce a novel direct method, using a lightweight (~0.5 g) miniature (3 x 3 x 7 mm) in-line load-cell to measure tension in the tibialis anterior tendon of rats. A linear motor was used to produce force-profiles to assess linearity, step-response, hysteresis and frequency behavior under controlled conditions. Sensor responses to a series of rectangular force-pulses correlated linearly (R2 = 0.999) within the range of 0–20 N. The maximal relative error at full scale (20 N) was 0.07% of the average measured signal. The standard deviation of the mean response to repeated 20 N force pulses was ± 0.04% of the mean response. The step-response of the load-cell showed the behavior of a PD2T2-element in control-engineering terminology. The maximal hysteretic error was 5.4% of the full-scale signal. Sinusoidal signals were attenuated maximally (-4 dB) at 200 Hz, within a measured range of 0.01–200 Hz. When measuring muscular forces this should be of minor concern as the fusion-frequency of muscles is generally much lower. The newly developed load-cell measured tensile forces of up to 20 N, without inelastic deformation of the sensor. It qualifies for various applications in which it is of interest directly to measure forces within a particular tendon causing only minimal disturbance to the biomechanical system. PMID:28934327

  14. A novel miniature in-line load-cell to measure in-situ tensile forces in the tibialis anterior tendon of rats.

    PubMed

    Schmoll, Martin; Unger, Ewald; Bijak, Manfred; Stoiber, Martin; Lanmüller, Hermann; Jarvis, Jonathan Charles

    2017-01-01

    Direct measurements of muscular forces usually require a substantial rearrangement of the biomechanical system. To circumvent this problem, various indirect techniques have been used in the past. We introduce a novel direct method, using a lightweight (~0.5 g) miniature (3 x 3 x 7 mm) in-line load-cell to measure tension in the tibialis anterior tendon of rats. A linear motor was used to produce force-profiles to assess linearity, step-response, hysteresis and frequency behavior under controlled conditions. Sensor responses to a series of rectangular force-pulses correlated linearly (R2 = 0.999) within the range of 0-20 N. The maximal relative error at full scale (20 N) was 0.07% of the average measured signal. The standard deviation of the mean response to repeated 20 N force pulses was ± 0.04% of the mean response. The step-response of the load-cell showed the behavior of a PD2T2-element in control-engineering terminology. The maximal hysteretic error was 5.4% of the full-scale signal. Sinusoidal signals were attenuated maximally (-4 dB) at 200 Hz, within a measured range of 0.01-200 Hz. When measuring muscular forces this should be of minor concern as the fusion-frequency of muscles is generally much lower. The newly developed load-cell measured tensile forces of up to 20 N, without inelastic deformation of the sensor. It qualifies for various applications in which it is of interest directly to measure forces within a particular tendon causing only minimal disturbance to the biomechanical system.

  15. Structure-function relationships using spectral-domain optical coherence tomography: comparison with scanning laser polarimetry.

    PubMed

    Aptel, Florent; Sayous, Romain; Fortoul, Vincent; Beccat, Sylvain; Denis, Philippe

    2010-12-01

    To evaluate and compare the regional relationships between visual field sensitivity and retinal nerve fiber layer (RNFL) thickness as measured by spectral-domain optical coherence tomography (OCT) and scanning laser polarimetry. Prospective cross-sectional study. One hundred and twenty eyes of 120 patients (40 with healthy eyes, 40 with suspected glaucoma, and 40 with glaucoma) were tested on Cirrus-OCT, GDx VCC, and standard automated perimetry. Raw data on RNFL thickness were extracted for 256 peripapillary sectors of 1.40625 degrees each for the OCT measurement ellipse and 64 peripapillary sectors of 5.625 degrees each for the GDx VCC measurement ellipse. Correlations between peripapillary RNFL thickness in 6 sectors and visual field sensitivity in the 6 corresponding areas were evaluated using linear and logarithmic regression analysis. Receiver operating curve areas were calculated for each instrument. With spectral-domain OCT, the correlations (r^2) between RNFL thickness and visual field sensitivity ranged from 0.082 (nasal RNFL and corresponding visual field area, linear regression) to 0.726 (supratemporal RNFL and corresponding visual field area, logarithmic regression). By comparison, with GDx-VCC, the correlations ranged from 0.062 (temporal RNFL and corresponding visual field area, linear regression) to 0.362 (supratemporal RNFL and corresponding visual field area, logarithmic regression). In pairwise comparisons, these structure-function correlations were generally stronger with spectral-domain OCT than with GDx VCC and with logarithmic regression than with linear regression. The largest areas under the receiver operating curve were seen for OCT superior thickness (0.963 ± 0.022; P < .001) in eyes with glaucoma and for OCT average thickness (0.888 ± 0.072; P < .001) in eyes with suspected glaucoma. 
The structure-function relationship was significantly stronger with spectral-domain OCT than with scanning laser polarimetry, and was better expressed logarithmically than linearly. Measurements with these 2 instruments should not be considered to be interchangeable. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
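For the Poisson member of this exponential family (variance proportional to the mean, a common model of signal-dependent noise), the quasi-likelihood updates reduce to the familiar multiplicative rules for KL-divergence NMF. A minimal sketch of that special case, with illustrative rank, iteration count, and toy data (not the paper's general algorithm):

```python
import numpy as np

def nmf_kl(V, rank, n_iter=500, seed=0):
    """Multiplicative-update NMF under a Poisson (KL) quasi-likelihood.

    Minimizes the generalized KL divergence D(V || W @ H), which is
    appropriate when noise variance scales with the mean. A standard
    sketch of this special case, not the paper's exact algorithm.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.T @ ones)   # update H, W fixed
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / (ones @ H.T)   # update W, H fixed
    return W, H

def kl_divergence(V, WH):
    """Goodness-of-fit: generalized KL divergence between V and its fit."""
    WH = WH + 1e-12
    return float(np.sum(V * np.log((V + 1e-12) / WH) - V + WH))

# toy example: an exactly rank-2 non-negative matrix is fit closely
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf_kl(V, rank=2)
fit_err = kl_divergence(V, W @ H)
```

The divergence computed at the end plays the role of the goodness-of-fit measure the abstract mentions; other link functions and noise models in the framework lead to different update rules.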

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisenti, N.; Gaebler, C. P. E.; Lynn, T. W.

Measuring an entangled state of two particles is crucial to many quantum communication protocols. Yet Bell-state distinguishability using a finite apparatus obeying linear evolution and local measurement is theoretically limited. We extend known bounds for Bell-state distinguishability in one and two variables to the general case of entanglement in n two-state variables. We show that at most 2^(n+1) - 1 classes out of 4^n hyper-Bell states can be distinguished with one copy of the input state. With two copies, complete distinguishability is possible. We present optimal schemes in each case.
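The bound is easy to tabulate; taking the abstract's counts as given, n = 1 recovers the well-known linear-optics result that only 3 classes of the 4 Bell states are distinguishable:

```python
# One-copy distinguishability bound quoted in the abstract: with linear
# evolution and local measurement, at most 2^(n+1) - 1 classes of the
# 4^n hyper-Bell states (entanglement in n two-state variables) can be
# distinguished.

def hyper_bell_counts(n):
    """Return (number of hyper-Bell states, max one-copy distinguishable classes)."""
    return 4 ** n, 2 ** (n + 1) - 1

for n in range(1, 5):
    states, classes = hyper_bell_counts(n)
    print(n, states, classes)   # n=1: 4/3, n=2: 16/7, n=3: 64/15, n=4: 256/31
```

The distinguishable fraction shrinks exponentially with n, which is why the two-copy scheme matters.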

  18. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
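The two estimators under discussion can be sketched directly: the ratio-of-averages (ROA) slope and the closed-form Deming slope, here with an assumed error-variance ratio delta = 1 and synthetic OC/EC data (the York generalization additionally weights each point by its individual uncertainties, which this sketch omits):

```python
import numpy as np

def roa_slope(oc, ec):
    """Ratio-of-averages estimate of (OC/EC)_pri (assumes OC_non-comb = 0)."""
    return np.mean(oc) / np.mean(ec)

def deming_slope(x, y, delta=1.0):
    """Deming regression slope; delta = var(err_y) / var(err_x).

    Standard closed form for errors in both variables -- a sketch of the
    estimator discussed in the abstract, not the York generalization.
    """
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return (syy - delta * sxx
            + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)

# synthetic data: true (OC/EC)_pri = 2, measurement noise in both variables
rng = np.random.default_rng(0)
ec_true = rng.uniform(0.5, 5.0, 200)
ec = ec_true + rng.normal(0, 0.1, 200)
oc = 2.0 * ec_true + rng.normal(0, 0.1, 200)
slope = deming_slope(ec, oc, delta=1.0)
```

Ordinary least squares applied to the same data would be biased low, because it attributes all scatter to the y variable.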

  19. Body mass index and waist circumference predict health-related quality of life, but not satisfaction with life, in the elderly.

    PubMed

    Wang, Lucy; Crawford, John D; Reppermund, Simone; Trollor, Julian; Campbell, Lesley; Baune, Bernhard T; Sachdev, Perminder; Brodaty, Henry; Samaras, Katherine; Smith, Evelyn

    2018-06-07

While obesity has been linked with lower quality of life in the general adult population, the prospective effects of present obesity on future quality of life amongst the elderly are unclear. This article investigates the cross-sectional and longitudinal relationships between obesity and aspects of quality of life in community-dwelling older Australians. A 2-year longitudinal sample of community dwellers aged 70-90 years at baseline, derived from the Sydney Memory and Ageing Study (MAS), was chosen for the study. Of the 1037 participants in the original MAS sample, a baseline (Wave 1) sample of 926 and a 2-year follow-up (Wave 2) sample of 751 subjects were retained for these analyses. Adiposity was measured using body mass index (BMI) and waist circumference (WC). Quality of life was measured using the Assessment of Quality of Life (6 dimensions) questionnaire (AQoL-6D) as well as the Satisfaction with Life Scale (SWLS). Linear regression and analysis of covariance (ANCOVA) were used to examine linear and non-linear relationships between BMI and WC and measures of health-related quality of life (HRQoL) and satisfaction with life, adjusting for age, sex, education, asthma, osteoporosis, depression, hearing and visual impairment, mild cognitive impairment, physical activity, and general health. Where a non-linear relationship was found, established BMI or WC categories were used in ANCOVA. Greater adiposity was associated with lower HRQoL but not life satisfaction. Regression modelling in cross-sectional analyses showed that higher BMI and greater WC were associated with lower scores for independent living, relationships, and pain (i.e. worse pain) on the AQoL-6D. In planned contrasts within a series of univariate analyses, obese participants scored lower in independent living and relationships, compared to normal weight and overweight participants.
Longitudinal analyses found that higher baseline BMI and WC were associated with lower independent living scores at Wave 2. Obesity is associated with and predicts lower quality of life in elderly adults aged 70-90 years, and the areas most affected are independent living, social relationships, and the experience of pain.

  20. Pilocarpine disposition and salivary flow responses following intravenous administration to dogs.

    PubMed

    Weaver, M L; Tanzer, J M; Kramer, P A

    1992-08-01

    Oral doses of pilocarpine increase salivary flow rates in patients afflicted with xerostomia (dry mouth). This study examined the pharmacokinetics of and a pharmacodynamic response (salivation) to intravenous pilocarpine nitrate administration in dogs. Disposition was linear over a dose range of 225-600 micrograms/kg; plasma concentrations were 10-120 micrograms/L. Elimination was rapid and generally biphasic, with a terminal elimination half-life of approximately 1.3 hr. The systemic clearance of pilocarpine was high (2.22 +/- 0.49 L/kg/hr) and its steady-state volume of distribution (2.30 +/- 0.64 L/kg) was comparable to that of many other basic drugs. All doses of pilocarpine induced measurable submaxillary and parotid salivary flow rates which could be maintained constant over time. Cumulative submaxillary saliva flow was linearly related to total pilocarpine dose. Plasma pilocarpine concentration was linearly related to both steady-state and postinfusion submaxillary salivary flow rates.

  1. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
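Class (1) above, an unweighted least-squares line with bootstrap resampling of the data points, can be sketched as follows (the toy data and resampling count are illustrative):

```python
import numpy as np

def bootstrap_slope(x, y, n_boot=2000, seed=0):
    """Unweighted least-squares slope with a bootstrap standard error
    (class 1 of the regression procedures listed in the abstract)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    slope = np.polyfit(x, y, 1)[0]
    # resample (x, y) pairs with replacement and refit each replicate
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    boot_slopes = np.array([np.polyfit(x[i], y[i], 1)[0] for i in idx])
    return slope, boot_slopes.std(ddof=1)

# toy distance-scale-like relation with intrinsic scatter
rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 80)
y = 4.0 * x + 1.0 + rng.normal(0, 0.3, 80)
slope, se = bootstrap_slope(x, y)
```

The bootstrap standard error requires no assumption about the scatter distribution, which is why it is attractive for heterogeneous astronomical samples.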

  2. Adiposity and Blood Pressure in 110 000 Mexican Adults

    PubMed Central

    Gnatiuc, Louisa; Halsey, Jim; Herrington, William G.; López-Cervantes, Malaquías; Lewington, Sarah; Collins, Rory; Tapia-Conyer, Roberto; Peto, Richard; Kuri-Morales, Pablo

    2017-01-01

Previous studies have reached differing conclusions about the importance of general versus central markers of adiposity to blood pressure, leading to suggestions that population-specific adiposity thresholds may be needed. We examined the relevance of adiposity to blood pressure among 111 911 men and women who, when recruited into the Mexico City Prospective Study, were aged 35 to 89 years, had no chronic disease, and were not taking antihypertensives. Linear regression was used to estimate the effects on systolic and diastolic blood pressure of 2 markers of general adiposity (body mass index and height-adjusted weight) and 4 markers of central adiposity (waist circumference, hip circumference, waist:hip ratio, and waist:height ratio), adjusted for relevant confounders. Mean (SD) adiposity levels were: body mass index (28.7±4.5 kg/m²), height-adjusted weight (70.2±11.2 kg), waist circumference (93.3±10.6 cm), hip circumference (104.0±9.0 cm), waist:hip ratio (0.90±0.06), and waist:height ratio (0.60±0.07). Associations with blood pressure were linear with no threshold levels below which lower general or central adiposity was not associated with lower blood pressure. On average, each 1 SD higher measured adiposity marker was associated with a 3 mm Hg higher systolic blood pressure and 2 mm Hg higher diastolic blood pressure (SEs <0.1 mm Hg), but for the waist:hip ratio, associations were only approximately half as strong. General adiposity associations were independent of central adiposity, but central adiposity associations were substantially reduced by adjustment for general adiposity. Findings were similar for men and women. In Mexican adults, often overweight or obese, markers of general adiposity were stronger independent predictors of blood pressure than measured markers of central adiposity, with no threshold effects. PMID:28223471
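The "per 1 SD higher marker" effects quoted here come from a confounder-adjusted linear regression on the standardized marker. A minimal sketch with synthetic data (the effect size, confounder, and noise levels are invented for illustration):

```python
import numpy as np

def per_sd_effect(marker, sbp, confounders):
    """Effect on blood pressure of a 1 SD higher adiposity marker,
    adjusted for confounders by multiple linear regression
    (mirrors the 'per 1 SD' effects quoted in the abstract)."""
    z = (marker - marker.mean()) / marker.std(ddof=1)   # standardize marker
    X = np.column_stack([np.ones_like(z), z, confounders])
    beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)
    return beta[1]   # coefficient of the standardized marker, in mm Hg per SD

# synthetic cohort: BMI raises SBP by ~3 mm Hg per SD, age as a confounder
rng = np.random.default_rng(2)
n = 5000
age = rng.uniform(35, 89, n)
bmi = 28.7 + 4.5 * rng.standard_normal(n) + 0.02 * (age - 60)
sbp = 120 + (3.0 / 4.5) * (bmi - 28.7) + 0.3 * (age - 60) + rng.normal(0, 10, n)
effect = per_sd_effect(bmi, sbp, age.reshape(-1, 1))
```

Standardizing each marker this way is what makes effects of BMI, waist circumference, and the ratio measures directly comparable in the abstract.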

  3. Helicons in uniform fields. I. Wave diagnostics with hodograms

    NASA Astrophysics Data System (ADS)

    Urrutia, J. M.; Stenzel, R. L.

    2018-03-01

    The wave equation for whistler waves is well known and has been solved in Cartesian and cylindrical coordinates, yielding plane waves and cylindrical waves. In space plasmas, waves are usually assumed to be plane waves; in small laboratory plasmas, they are often assumed to be cylindrical "helicon" eigenmodes. Experimental observations fall in between both models. Real waves are usually bounded and may rotate like helicons. Such helicons are studied experimentally in a large laboratory plasma which is essentially a uniform, unbounded plasma. The waves are excited by loop antennas whose properties determine the field rotation and transverse dimensions. Both m = 0 and m = 1 helicon modes are produced and analyzed by measuring the wave magnetic field in three dimensional space and time. From Ampère's law and Ohm's law, the current density and electric field vectors are obtained. Hodograms for these vectors are produced. The sign ambiguity of the hodogram normal with respect to the direction of wave propagation is demonstrated. In general, electric and magnetic hodograms differ but both together yield the wave vector direction unambiguously. Vector fields of the hodogram normal yield the phase flow including phase rotation for helicons. Some helicons can have locally a linear polarization which is identified by the hodogram ellipticity. Alternatively the amplitude oscillation in time yields a measure for the wave polarization. It is shown that wave interference produces linear polarization. These observations emphasize that single point hodogram measurements are inadequate to determine the wave topology unless assuming plane waves. Observations of linear polarization indicate wave packets but not plane waves. A simple qualitative diagnostics for the wave polarization is the measurement of the magnetic field magnitude in time. Circular polarization has a constant amplitude; linear polarization results in amplitude modulations.

  4. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs.
Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.
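A drastically simplified sketch of the preposterior idea behind PreDIA, for one scalar measurement with Gaussian noise and an ensemble standing in for the prior: hypothetical data values are generated from the ensemble itself, members are reweighted bootstrap-filter style, and the posterior prediction variance is averaged over data realizations, with no linearization anywhere. (The real method handles arbitrary measurement types, task-driven objectives, and structural model uncertainty; everything below is an illustrative assumption.)

```python
import numpy as np

def expected_posterior_var(pred, sim_data, noise_sd, n_real=200, seed=0):
    """Preposterior data impact, bootstrap-filter style (simplified sketch).

    pred[i]     : prediction of ensemble member i
    sim_data[i] : what member i says the candidate sensor would read
    For hypothetical data drawn from the ensemble plus noise, weight all
    members by the data likelihood and average the weighted prediction
    variance over realizations. Lower value = more informative sensor.
    """
    rng = np.random.default_rng(seed)
    pred = np.asarray(pred, float)
    sim_data = np.asarray(sim_data, float)
    var_sum = 0.0
    for k in rng.integers(0, len(pred), n_real):
        y_hyp = sim_data[k] + rng.normal(0.0, noise_sd)   # hypothetical datum
        w = np.exp(-0.5 * ((sim_data - y_hyp) / noise_sd) ** 2)
        w /= w.sum()                                      # likelihood weights
        m = np.sum(w * pred)
        var_sum += np.sum(w * (pred - m) ** 2)            # posterior variance
    return var_sum / n_real

# toy comparison: a sensor that observes the prediction vs. an unrelated one
rng = np.random.default_rng(3)
theta = rng.standard_normal(2000)           # prior ensemble
pred = theta                                # prediction = parameter itself
informative = theta.copy()                  # sensor reads theta directly
uninformative = rng.standard_normal(2000)   # sensor unrelated to theta
v_info = expected_posterior_var(pred, informative, noise_sd=0.5)
v_none = expected_posterior_var(pred, uninformative, noise_sd=0.5)
```

In this toy setup the informative sensor shrinks the expected prediction variance well below the prior variance of 1, while the unrelated sensor leaves it essentially unchanged, which is the comparison an optimal design search automates.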

  5. Follow the line: Mysterious bright streaks on Dione and Rhea

    NASA Astrophysics Data System (ADS)

    Martin, E. S.; Patthoff, D. A.

    2017-12-01

Our recent mapping of the wispy terrains of Saturn's moons Dione and Rhea has revealed unique linear features that are generally long (10s-100s km), narrow (1-10 km), brighter than the surrounding terrains, and their detection may be sensitive to lighting geometries. We refer to these features as 'linear virgae.' Wherever linear virgae are observed, they appear to crosscut all other structures, suggesting that they are the youngest features on these satellites. Despite their young age and wide distribution, linear virgae on Rhea and Dione have largely been overlooked in the literature. Linear virgae on Dione have previously been identified in Voyager and Cassini data, but their formation remains an open question. If linear virgae are found to be endogenic, it would suggest that the surfaces of Dione and Rhea have been active recently. Alternatively, if linear virgae are exogenic it would suggest that the surfaces have been modified by a possibly common mechanism. Further work would be necessary to determine both a source of material and the dynamical environment that could produce these features. Here we present detailed morphometric measurements to further constrain whether linear virgae on Rhea and Dione share common origins. We complete an in-depth assessment of the lighting geometries where these features are visible. If linear virgae in the Saturnian system show common morphologies and distributions, a new, recently active, possibly system-wide mechanism may be revealed, thereby improving our understanding of the recent dynamical environment around Saturn.
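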

  6. An Exposition on the Nonlinear Kinematics of Shells, Including Transverse Shearing Deformations

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2013-01-01

An in-depth exposition on the nonlinear deformations of shells with "small" initial geometric imperfections is presented without the use of tensors. First, the mathematical descriptions of an undeformed-shell reference surface, and its deformed image, are given in general nonorthogonal coordinates. The two-dimensional Green-Lagrange strains of the reference surface are derived and simplified for the case of "small" strains. Linearized reference-surface strains, rotations, curvatures, and torsions are then derived and used to obtain the "small" Green-Lagrange strains in terms of linear deformation measures. Next, the geometry of the deformed shell is described mathematically and the "small" three-dimensional Green-Lagrange strains are given. The deformations of the shell and its reference surface are related by introducing a kinematic hypothesis that includes transverse shearing deformations and contains the classical Love-Kirchhoff kinematic hypothesis as a proper, explicit subset. Lastly, summaries of the essential equations are given for general nonorthogonal and orthogonal coordinates, and the basis for further simplification of the equations is discussed.

  7. Streamflow record extension using power transformations and application to sediment transport

    NASA Astrophysics Data System (ADS)

    Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.

    1999-01-01

    To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
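The MOVE extension referred to above replaces the OLS slope r·s_y/s_x with ±s_y/s_x, so the extended record reproduces, rather than shrinks, the variance of the short record. A sketch on log-flows (the paper's contribution is to swap the log for a Box-Cox transform chosen to linearize the relation; the toy data here are illustrative):

```python
import numpy as np

def move_extend(x_new, x_short, y_short):
    """MOVE.1 record extension (Hirsch, 1982):
    y_hat = ybar + sign(r) * (s_y / s_x) * (x - xbar).

    Unlike OLS (slope r * s_y / s_x), this preserves the variance of the
    short-record series instead of deflating it, which matters when the
    goal is the population of flows rather than each day's best estimate.
    """
    xbar, ybar = np.mean(x_short), np.mean(y_short)
    sx = np.std(x_short, ddof=1)
    sy = np.std(y_short, ddof=1)
    r = np.corrcoef(x_short, y_short)[0, 1]
    return ybar + np.sign(r) * (sy / sx) * (np.asarray(x_new) - xbar)

# toy example on log-flows: extend a 150-day record using a 300-day gage
rng = np.random.default_rng(4)
lx = rng.normal(0.0, 1.0, 300)                       # log-flows, long record
ly = 0.7 + 0.9 * lx[:150] + rng.normal(0, 0.3, 150)  # overlapping short record
ly_ext = move_extend(lx[150:], lx[:150], ly)         # extended log-flows
```

Because cumulative sediment transport is driven by the distribution of flows, variance preservation is exactly the property the abstract is exploiting.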

  8. Practical robustness measures in multivariable control system analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.

    1981-01-01

The robustness of the stability of multivariable linear time invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single input, single output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those robustness tests that do not. The robustness of linear quadratic Gaussian control systems is analyzed.

  9. Evaluation of algorithms for geological thermal-inertia mapping

    NASA Technical Reports Server (NTRS)

    Miller, S. H.; Watson, K.

    1977-01-01

    The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).

  10. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Proof of the quantitative potential of immunofluorescence by mass spectrometry.

    PubMed

    Toki, Maria I; Cecchi, Fabiola; Hembrough, Todd; Syrigos, Konstantinos N; Rimm, David L

    2017-03-01

Protein expression in formalin-fixed, paraffin-embedded patient tissue is routinely measured by immunohistochemistry (IHC). However, IHC has been shown to be subject to variability in sensitivity, specificity and reproducibility, and is generally, at best, considered semi-quantitative. Mass spectrometry (MS) is considered by many to be the criterion standard for protein measurement, offering high sensitivity, specificity, and objective molecular quantification. Here, we seek to show that quantitative immunofluorescence (QIF) with standardization can achieve quantitative results comparable to MS. Epidermal growth factor receptor (EGFR) was measured by quantitative immunofluorescence in 15 cell lines with a wide range of EGFR expression, using different primary antibody concentrations, including the optimal signal-to-noise concentration after quantitative titration. QIF target measurement was then compared to the absolute EGFR concentration measured by Liquid Tissue-selected reaction monitoring mass spectrometry. The best agreement between the two assays was found when the EGFR primary antibody was used at the optimal signal-to-noise concentration, revealing a strong linear relationship (R² = 0.88). This demonstrates that quantitative optimization of titration by calculation of signal-to-noise ratio allows QIF to be standardized to MS and can therefore be used to assess absolute protein concentration in a linear and reproducible manner.

  12. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  13. The reliability and reproducibility of cephalometric measurements: a comparison of conventional and digital methods

    PubMed Central

    AlBarakati, SF; Kula, KS; Ghoneima, AA

    2012-01-01

Objective The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6 week interval between measurements. The reliability within the method was determined using Pearson's correlation coefficient (r²). The reproducibility between methods was calculated by paired t-test. The level of statistical significance was set at p < 0.05. Results All measurements for each method had r² above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements except for ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and the lower anterior facial height (p = 0.6). Conclusion In general, both methods of conventional and digital cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
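The two statistics used in this design, a within-method squared Pearson correlation between repeated tracings and a paired t statistic between methods, can be sketched as follows (the toy angle data are invented for illustration):

```python
import numpy as np

def within_method_reliability(first, second):
    """Intra-examiner reliability: squared Pearson correlation between
    two repeated tracings of the same radiographs."""
    r = np.corrcoef(first, second)[0, 1]
    return r ** 2

def paired_t(a, b):
    """Paired t statistic for between-method reproducibility
    (same patients measured by both methods)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# toy example: an angle on 30 'patients', traced twice by one method
rng = np.random.default_rng(5)
truth = rng.normal(82, 3, 30)            # hypothetical angle, degrees
trace1 = truth + rng.normal(0, 0.5, 30)  # first tracing
trace2 = truth + rng.normal(0, 0.5, 30)  # repeat tracing, same examiner
r2 = within_method_reliability(trace1, trace2)
```

Note the two statistics answer different questions: a method can be highly reliable (high r²) yet still differ systematically from another method (significant paired t), which is exactly the pattern this study reports.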

  14. On bipartite pure-state entanglement structure in terms of disentanglement

    NASA Astrophysics Data System (ADS)

    Herbut, Fedor

    2006-12-01

    Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.

  15. Langmuir-Probe Measurements in Flowing-Afterglow Plasmas

    NASA Technical Reports Server (NTRS)

    Johnsen, R.; Shunko, E. V.; Gougousi, T.; Golde, M. F.

    1994-01-01

The validity of the orbital-motion theory for cylindrical Langmuir probes immersed in flowing-afterglow plasmas is investigated experimentally. It is found that the probe currents scale linearly with probe area only for electron-collecting but not for ion-collecting probes. In general, no agreement is found between the ion and electron densities derived from the probe currents. Measurements in recombining plasmas support the conclusion that only the electron densities derived from probe measurements can be trusted to be of acceptable accuracy. This paper also includes a brief derivation of the orbital-motion theory, a discussion of perturbations of the plasma by the probe current, and the interpretation of plasma velocities obtained from probe measurements.

  16. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  17. Linear versus non-linear measures of temporal variability in finger tapping and their relation to performance on open- versus closed-loop motor tasks: comparing standard deviations to Lyapunov exponents.

    PubMed

    Christman, Stephen D; Weaver, Ryan

    2008-05-01

    The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
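Standard deviation treats a time series as uncorrelated scatter, while a Lyapunov exponent measures how quickly nearby trajectories diverge. As a generic stand-in for the tapping-series estimator used in the study (which must reconstruct the dynamics from data), here is the exponent of a simple map whose slope magnitude is known everywhere:

```python
import math

def lyapunov_tent(r, n=50000, x0=0.37, burn=500):
    """Largest Lyapunov exponent of the tent map x -> r * min(x, 1 - x),
    estimated as the orbit average of log|f'(x)|.

    Positive exponent = nearby trajectories diverge (chaotic variability);
    negative exponent = perturbations die out. For the tent map the slope
    magnitude is r everywhere, so the estimate equals ln(r) exactly --
    a convenient sanity check for the estimator structure.
    """
    x = x0
    for _ in range(burn):                  # discard transient
        x = r * min(x, 1.0 - x)
    total = 0.0
    for _ in range(n):
        slope = r if x < 0.5 else -r       # piecewise slope of the tent map
        x = r * min(x, 1.0 - x)
        total += math.log(abs(slope))
    return total / n

lam_chaotic = lyapunov_tent(1.9)   # ln 1.9 > 0: diverging trajectories
lam_stable = lyapunov_tent(0.9)    # ln 0.9 < 0: perturbations contract
```

Two tapping series can share the same standard deviation yet differ in sign of this exponent, which is why the study treats the two measures as complementary rather than redundant.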

  18. Picosecond time-resolved measurements of dense plasma line shifts

    DOE PAGES

    Stillman, C. R.; Nilson, P. M.; Ivancic, S. T.; ...

    2017-06-13

    Picosecond time-resolved x-ray spectroscopy is used to measure the spectral line shift of the 1s2p–1s² transition in He-like Al ions as a function of the instantaneous plasma conditions. The plasma temperature and density are inferred from the Al Heα complex using a nonlocal-thermodynamic-equilibrium atomic physics model. The experimental spectra show a linearly increasing red shift for electron densities of (1–5) × 10²³ cm⁻³. Furthermore, the measured line shifts are broadly consistent with a generalized analytic line-shift model based on calculations of a self-consistent-field ion-sphere model.

  20. A linear diode array (JFD-5) for match line in vivo dosimetry in photon and electron beams; evaluation for a chest wall irradiation technique.

    PubMed

    Essers, M; van Battum, L; Heijmen, B J

    2001-11-01

    In vivo dosimetry using thermoluminescence detectors (TLDs) is routinely performed in our institution to determine dose inhomogeneities in the match line region during chest wall irradiation. However, TLDs have some drawbacks: online in vivo dosimetry cannot be performed; the doses delivered by the contributing fields are generally not measured separately; and measurement analysis is time consuming. To overcome these problems, the Joined Field Detector (JFD-5), a diode-based detector for match line in vivo dosimetry, has been developed. This detector and its characteristics are presented. The JFD-5 is a linear array of 5 p-type diodes. The middle three diodes, used to measure the dose in the match line region, are positioned at 5-mm intervals. The outer two diodes, positioned at 3-cm distance from the central diode, are used to measure the dose in the two contributing fields. For three JFD-5 detectors, calibration factors for different energies, and sensitivity correction factors for non-standard field sizes, patient skin temperature, and oblique incidence have been determined. The accuracy of penumbra and match line dose measurements has been determined in phantom studies and in vivo. Calibration factors differ significantly between diodes and between photon and electron beams; however, conversion factors between energies can be applied. The correction factor for temperature is 0.35%/°C, and for oblique incidence 2% at maximum. The penumbra measured with the JFD-5 agrees well with film and linear diode array measurements. JFD-5 in vivo match line dosimetry reproducibility was 2.0% (1 SD), while the agreement with TLD was 0.999 ± 0.023 (1 SD). The JFD-5 can be used for accurate, reproducible, and fast online match line in vivo dosimetry.
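
    The sensitivity corrections described above amount to a simple multiplicative chain from diode reading to dose. A hypothetical sketch (the calibration factor and readings are invented; only the 0.35%/°C and 2% figures come from the abstract):

```python
# Illustrative application of diode correction factors of the kind reported
# for the JFD-5 (0.35%/°C temperature dependence, up to 2% for oblique
# incidence).  Calibration factor and readings are hypothetical examples.

def corrected_dose(reading, cal_factor, skin_temp_c, ref_temp_c=22.0,
                   temp_coeff=0.0035, obliquity_factor=1.0):
    """Convert a diode reading to dose, correcting for sensitivity drifts."""
    temp_corr = 1.0 + temp_coeff * (skin_temp_c - ref_temp_c)  # sensitivity rises with T
    return reading * cal_factor / (temp_corr * obliquity_factor)

# hypothetical: reading of 1000 counts, 0.002 Gy/count, patient skin at 32 °C
dose = corrected_dose(1000.0, 0.002, 32.0)
```

    At the reference temperature the correction is unity; warmer skin increases diode sensitivity, so the reported dose is scaled down accordingly.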

  1. Experimental and analytical investigations of longitudinal combustion instability in a continuously variable resonance combustor (CVRC)

    NASA Astrophysics Data System (ADS)

    Yu, Yen Ching

    An analytical model based on the linearized Euler equations (LEE) is developed and used in conjunction with a validating experiment to study combustion instability. The LEE model features mean flow effects, entropy waves, adaptability for more physically realistic boundary conditions, and is generalized for multiple-domain conditions. The model calculates the spatial modes, resonant frequencies, and linear growth rates of the overall system. The predicted resonant frequencies and spatially resolved mode shapes agree with the experimental data from a longitudinally unstable model rocket combustor to within 7%. Different gaseous fuels (methane, ethylene, and hydrogen) were tested under fixed geometry. Tests with hydrogen were stable, whereas ethylene, methane, and JP-8 were increasingly unstable. A novel method for obtaining large amounts of stability data under variable resonance conditions in a single test was demonstrated. The continuously variable resonance combustor (CVRC) incorporates a traversing choked axial oxidizer inlet to vary the overall combustion system resonance. The CVRC experiment successfully demonstrates different levels of instability and transitions between stability levels, and identifies the most stable and most unstable geometric combinations. Pressure oscillation amplitudes ranged from less than 10% of mean pressure to greater than 60%. At low amplitudes, the measured resonant frequency changed with inlet location, but at high amplitudes the measured resonant frequency matched the frequency of the combustion chamber. As the system transitions from linear to non-linear instability, the higher harmonics of the fundamental resonant mode appear nearly simultaneously. Transient, high-amplitude, broadband noise at lower frequencies (on the order of 200 Hz) is also observed. Conversely, as the system transitions back to a more linear stability regime, the higher harmonics disappear sequentially, led by the highest order. Good agreement between analytical and experimental results is attained by treating the experiment as quasi-stationary. The stability characteristics from the high-frequency measurements are further analyzed using filtered pressure traces, spectrograms, power spectral density plots, and oscillation decrements. Recommended future work includes: direct measurements, such as chemiluminescence or high-speed imaging, to examine the unsteady combustion processes; three-way comparisons between the acoustic-based, linear Euler-based, and non-linear Euler/RANS models; and use of high-fidelity computation to investigate the forcing terms modeled in the acoustic-based model.

  2. Thermophysical Properties of 60-NITINOL for Mechanical Component Applications

    NASA Technical Reports Server (NTRS)

    Stanford, Malcolm K.

    2012-01-01

    The linear thermal expansion coefficient, specific heat capacity, electrical resistivity, and thermal conductivity of 60-NITINOL were studied over a range of temperatures representing the operating environment of an oil-lubricated bearing. The behavior of this material appears to follow well-established theories applicable either to metal alloys in general or to intermetallic compounds more specifically, and the measured data were found to be comparable to those for conventional bearing alloys.

  3. Birth weight, current anthropometric markers, and high sensitivity C-reactive protein in Brazilian school children.

    PubMed

    Boscaini, Camile; Pellanda, Lucia Campos

    2015-01-01

    Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, the General Linear Model, and the General Linear Model for the gamma distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, resp.). Large for gestational age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small for gestational age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Being large for gestational age was associated with altered high sensitivity C-reactive protein and represents an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.

  4. Efficient Simultaneous Reconstruction of Time-Varying Images and Electrode Contact Impedances in Electrical Impedance Tomography.

    PubMed

    Boverman, Gregory; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J; Kao, Tzu-Jen; Amm, Bruce C; Wang, Xin; Davenport, David M; Chong, David H; Sahni, Rakesh; Ashe, Jeffrey M

    2017-04-01

    In electrical impedance tomography (EIT), we apply patterns of currents on a set of electrodes at the external boundary of an object, measure the resulting potentials at the electrodes, and, given the aggregate dataset, reconstruct the complex conductivity and permittivity within the object. It is possible to maximize sensitivity to internal conductivity changes by simultaneously applying currents and measuring potentials on all electrodes, but this approach also maximizes sensitivity to changes in impedance at the interface. We have, therefore, developed algorithms to assess contact impedance changes at the interface as well as to efficiently and simultaneously reconstruct internal conductivity/permittivity changes within the body. We use simple linear algebraic manipulations, the generalized singular value decomposition, and a dual-mesh finite-element-based framework to reconstruct images in real time. We are also able to efficiently compute the linearized reconstruction for a wide range of regularization parameters and to compute both the generalized cross-validation parameter and the L-curve, objective approaches to determining the optimal regularization parameter, in a similarly efficient manner. Results are shown using data from a normal subject and from a clinical intensive care unit patient, both acquired with the GE GENESIS prototype EIT system, demonstrating significantly reduced boundary artifacts due to electrode drift and motion artifact.
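
    The linearized reconstruction step amounts to Tikhonov regularization swept over a range of parameters, with the residual and solution norms tracing the L-curve. A small dense sketch with an ill-conditioned stand-in matrix (the paper's actual operator is a FEM-based Jacobian handled via the generalized singular value decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small ill-conditioned "sensitivity" matrix standing in for a linearized
# EIT forward operator (the paper uses a FEM Jacobian and the GSVD).
n = 20
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy measurements

# Tikhonov reconstruction for a sweep of regularization parameters;
# the (residual norm, solution norm) pairs trace out the L-curve.
lams = np.logspace(-6, 0, 25)
l_curve = []
for lam in lams:
    x_hat = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
    l_curve.append((np.linalg.norm(A @ x_hat - b), np.linalg.norm(x_hat)))
```

    The residual norm grows and the solution norm shrinks monotonically with the regularization parameter, so the logged pairs form the characteristic L shape whose corner marks a reasonable parameter choice; precomputing one factorization (as with the GSVD) makes the sweep cheap.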

  5. Response of optically stimulated luminescence dosimeters subjected to X-rays in diagnostic energy range

    NASA Astrophysics Data System (ADS)

    Musa, Y.; Hashim, S.; Karim, M. K. A.; Bakar, K. A.; Ang, W. C.; Salehhon, N.

    2017-05-01

    The use of optically stimulated luminescence (OSL) for dosimetry applications has recently increased considerably due to the availability of commercial OSL dosimeters (nanoDots) for clinical use. The OSL dosimeter has great potential for clinical dosimetry because of its prevailing advantages in both handling and application. However, utilising nanoDot OSLDs for dose measurement in diagnostic radiology can only be guaranteed when the performance and characteristics of the dosimeters are apposite. In the present work, we examined the response of commercially available nanoDot OSLDs (Al2O3:C) subjected to X-rays in general radiography. The nanoDot response with respect to reproducibility, dose linearity and signal depletion was analysed using a microStar reader (Landauer, Inc., Glenwood, IL). Irradiations were performed free-in-air using 70, 80 and 120 kV tube voltages and tube current-time products ranging from 10 to 100 mAs. The results showed that the nanoDots exhibit good linearity and reproducibility when subjected to diagnostic X-rays, with coefficients of variation (CV) ranging from 2.3% to 3.5%. The results also indicated an average signal reduction of 1% per readout. Hence, the nanoDots show promising potential for dose measurement in general X-ray procedures.
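
    The reproducibility and linearity figures of merit used here are straightforward to compute. A sketch with made-up readings (not the paper's data):

```python
import numpy as np

# Reproducibility as coefficient of variation (CV = sigma/mean) over repeat
# readouts, and dose linearity as a straight-line fit of signal vs mAs.
# All readings below are invented example values.
repeat_readings = np.array([101.2, 99.5, 103.1, 98.8, 100.9])
cv = repeat_readings.std(ddof=1) / repeat_readings.mean() * 100  # percent

mas = np.array([10, 20, 40, 60, 80, 100], dtype=float)
signal = 5.02 * mas + np.array([0.4, -0.6, 1.1, -0.8, 0.3, -0.2])  # near-linear
slope, intercept = np.polyfit(mas, signal, 1)
r = np.corrcoef(mas, signal)[0, 1]   # linearity check: r close to 1
```

    A CV of a few percent and a correlation coefficient near unity are the kind of criteria the abstract's "good reproducibility" and "good linearity" statements rest on.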

  6. How to characterize a nonlinear elastic material? A review on nonlinear constitutive parameters in isotropic finite elasticity

    PubMed Central

    2017-01-01

    The mechanical response of a homogeneous isotropic linearly elastic material can be fully characterized by two physical constants, the Young’s modulus and the Poisson’s ratio, which can be derived by simple tensile experiments. Any other linear elastic parameter can be obtained from these two constants. By contrast, the physical responses of nonlinear elastic materials are generally described by parameters which are scalar functions of the deformation, and their particular choice is not always clear. Here, we review in a unified theoretical framework several nonlinear constitutive parameters, including the stretch modulus, the shear modulus and the Poisson function, that are defined for homogeneous isotropic hyperelastic materials and are measurable under axial or shear experimental tests. These parameters represent changes in the material properties as the deformation progresses, and can be identified with their linear equivalent when the deformations are small. Universal relations between certain of these parameters are further established, and then used to quantify nonlinear elastic responses in several hyperelastic models for rubber, soft tissue and foams. The general parameters identified here can also be viewed as a flexible basis for coupling elastic responses in multi-scale processes, where an open challenge is the transfer of meaningful information between scales. PMID:29225507
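
    The statement that the nonlinear parameters reduce to their linear equivalents at small deformations can be checked numerically for a concrete model. For an incompressible neo-Hookean solid, the uniaxial stretch modulus tends to the Young's modulus E = 3μ as the stretch tends to 1 (the value of μ below is an arbitrary example):

```python
# For an incompressible neo-Hookean solid, the uniaxial Cauchy stress is
# sigma(lam) = mu*(lam**2 - 1/lam).  The nonlinear stretch modulus
# d sigma / d lam should reduce to the linear Young's modulus E = 3*mu
# as lam -> 1 (Poisson's ratio 1/2 for incompressibility).
mu = 1.0e6  # shear modulus in Pa (example value)

def sigma(lam):
    return mu * (lam**2 - 1.0 / lam)

def stretch_modulus(lam, h=1e-6):
    # central finite difference of stress with respect to stretch
    return (sigma(lam + h) - sigma(lam - h)) / (2 * h)

E_small_strain = stretch_modulus(1.0)   # approaches 3*mu
E_at_50pct = stretch_modulus(1.5)       # stiffer: nonlinearity has set in
```

    Evaluating the same modulus at larger stretches quantifies how the material property changes as deformation progresses, which is exactly the role of the nonlinear constitutive parameters reviewed in the paper.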

  7. Linear response to long wavelength fluctuations using curvature simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldauf, Tobias; Zaldarriaga, Matias; Seljak, Uroš

    2016-09-01

    We study the local response to long wavelength fluctuations in cosmological N-body simulations, focusing on the matter and halo power spectra, halo abundance and non-linear transformations of the density field. The long wavelength mode is implemented using an effective curved cosmology and a mapping of time and distances. The method provides an alternative, more direct, way to measure the isotropic halo biases. Limiting ourselves to the linear case, we find generally good agreement between the biases obtained from the curvature method and the traditional power spectrum method at the level of a few percent. We also study the response of halo counts to changes in the variance of the field and find that the slope of the relation between the responses to density and variance differs from the naïve derivation assuming a universal mass function by approximately 8–20%. This has implications for measurements of the amplitude of local non-Gaussianity using scale dependent bias. We also analyze the halo power spectrum and halo-dark matter cross-spectrum response to long wavelength fluctuations and derive second order halo bias from it, as well as the super-sample variance contribution to the galaxy power spectrum covariance matrix.

  8. Polyethylene Naphthalate Scintillator: A Novel Detector for the Dosimetry of Radioactive Ophthalmic Applicators.

    PubMed

    Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion

    2015-09-01

    Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm(3) and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions. Hence, they are a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not been shown yet. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. For all radiation qualities under investigation, covering a wide range of dose rates, a linearity of the detector response to the dose was shown. Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications.

  9. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable in proteotypic peptide-based quantitative proteomics strategies because of differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS is explicitly illustrated; different experimental variables, such as MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors, were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast to mouse and human, and to quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Generation of optical vortices with controllable topological charges and polarization patterns

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Han; Fuh, Andy Ying-Guey

    2017-02-01

    We present a simple and flexible method of generating various vectorial vortex beams (VVBs) based on a scheme of double modulation from a single liquid crystal spatial light modulator (SLM). In this configuration, a half-wave plate (HWP) placed in front of the SLM is first used to control the weights of the linear polarization components of the incident light. Then, we encode two orbital angular momentum (OAM) eigenstates, displayed on each half of the SLM, onto each of the linear components of the light. This yields VVB fields spanned by a pair of linearly polarized OAM eigenstates. In order to convert the polarization bases from the linear pair into another orthogonal pair, a quarter-wave plate (QWP) placed behind the SLM is used. This enables us to generate VVBs spanned by any pair of orthogonally polarized OAM eigenstates. Generally, the light state of polarization (SOP) can be represented as a geodesic path located on the plane perpendicular to the axis connecting the pair of bases used on the Poincaré sphere. The light property is adjustable via the slow-axis orientations of both the HWP and QWP, as well as via computer-generated holograms. To validate the generated beams, two measurement procedures are applied. First, Stokes polarimetry is used to measure the light SOP over the transverse plane. Next, a Shack-Hartmann wavefront sensor is used to measure the OAM charge. The simulated and experimental results are in good qualitative agreement. In addition, both polarization patterns and OAM charges can be controlled independently using the proposed method.
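
    The Stokes polarimetry step reduces a set of analyzer intensity measurements to the Stokes vector. A minimal sketch with idealized example intensities (not measured data):

```python
import numpy as np

# Stokes polarimetry reduces six intensity measurements (through horizontal,
# vertical, +45°, -45° and right/left circular analyzers) to the Stokes
# vector S = (S0, S1, S2, S3).
def stokes(i_h, i_v, i_p45, i_m45, i_rcp, i_lcp):
    s0 = i_h + i_v        # total intensity
    s1 = i_h - i_v        # horizontal/vertical balance
    s2 = i_p45 - i_m45    # diagonal balance
    s3 = i_rcp - i_lcp    # circular balance
    return np.array([s0, s1, s2, s3])

s = stokes(0.5, 0.5, 0.5, 0.5, 1.0, 0.0)   # ideal right-circular input
dop = np.linalg.norm(s[1:]) / s[0]         # degree of polarization
```

    Repeating this pixel by pixel over the transverse plane gives the SOP map used to verify the generated polarization patterns.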

  11. Exposure-lag-response in Longitudinal Studies: Application of Distributed Lag Non-linear Models in an Occupational Cohort.

    PubMed

    Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie

    2018-02-13

    Prolonged exposures can have complex relationships with health outcomes, as the timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers followed from 1942 to 2011. We also assessed associations using simple measures of cumulative exposure, assuming a linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from the simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m³ during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI): 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89, 1.48) and 1.23 (95% CI: 1.03, 1.46), respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
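
    The exposure-lag-response idea can be illustrated with a toy calculation: a lag-specific coefficient function turns an exposure history into a cumulative log rate ratio. The coefficients below are invented for illustration and are not the fitted DLNM:

```python
import numpy as np

# Toy exposure-lag-response: the log rate ratio is the exposure history
# weighted by a lag-response function beta(lag) and summed over lags.
# The beta values are invented (per mg/m^3-year), not the paper's fit;
# they are merely larger in the 31-50 lag-year window highlighted above.
lags = np.arange(0, 51)
beta = np.where((lags >= 31) & (lags <= 50), 0.002, 0.0005)

exposure_history = np.full(51, 0.4)   # 0.4 mg/m^3 in every lag year
log_rr = np.sum(beta * exposure_history)
rr = np.exp(log_rr)                   # cumulative rate ratio
```

    A fitted DLNM replaces the hand-written step function with smooth spline bases in both the exposure and lag dimensions, which is what gives the method its flexibility.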

  12. Observation Impacts for Longer Forecast Lead-Times

    NASA Astrophysics Data System (ADS)

    Mahajan, R.; Gelaro, R.; Todling, R.

    2013-12-01

    Observation impacts on forecasts evaluated using adjoint-based techniques (e.g., Langland and Baker, 2004) are limited by the validity of the assumptions underlying the forecasting model adjoint. Most applications of this approach have focused on deriving observation impacts on short-range forecasts (e.g., 24-hour), in part to stay well within linearization assumptions. The most widely used measure of observation impact relies on the availability of the analysis for verifying the forecasts. As pointed out by Gelaro et al. (2007), and more recently by Todling (2013), this introduces undesirable correlations in the measure that are likely to affect the resulting assessment of the observing system. Stappers and Barkmeijer (2012) introduced a technique that, in principle, allows extending the validity of tangent linear and corresponding adjoint models to longer lead-times, thereby reducing the correlations in the measures used for observation impact assessments. The methodology provides the means to better represent linearized models by making use of Gaussian quadrature relations to handle various underlying non-linear model trajectories. The formulation is exact for particular bi-linear dynamics; it corresponds to an approximation for general-type nonlinearities and must be tested for large atmospheric models. The present work investigates the approach of Stappers and Barkmeijer (2012) in the context of NASA's Goddard Earth Observing System Version 5 (GEOS-5) atmospheric data assimilation system (ADAS). The goal is to calculate observation impacts in the GEOS-5 ADAS for forecast lead-times of at least 48 hours in order to reduce the potential for undesirable correlations that occur at shorter forecast lead-times. References: [1] Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189-201. [2] Gelaro, R., Y. Zhu, and R. M. Errico, 2007: Examination of various-order adjoint-based approximations of observation impact. Meteorologische Zeitschrift, 16, 685-692. [3] Stappers, R. J. J., and J. Barkmeijer, 2012: Optimal linearization trajectories for tangent linear models. Q. J. R. Meteorol. Soc., 138, 170-184. [4] Todling, R., 2013: Comparing two approaches for assessing observation impact. Mon. Wea. Rev., 141, 1484-1505.

  13. Linear-time reconstruction of zero-recombinant Mendelian inheritance on pedigrees without mating loops.

    PubMed

    Liu, Lan; Jiang, Tao

    2007-01-01

    With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn)-time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is denoted by the span of a basis in the solution space of its associated homogeneous system, offset from the origin by a particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
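
    The linear-algebra backbone of this approach can be illustrated over GF(2). The sketch below uses plain Gaussian elimination mod 2, which is precisely the step the paper's linear-time construction avoids, to produce a particular solution and a null-space basis whose span gives the general solution; the toy system is invented:

```python
import numpy as np

# Solve A x = b over GF(2) by Gaussian elimination mod 2, returning a
# particular solution and a null-space basis (the general solution is the
# particular solution plus any GF(2) combination of the basis vectors).
def solve_gf2(A, b):
    A = A.copy() % 2
    b = b.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        sel = np.nonzero(A[row:, col])[0]
        if sel.size == 0:
            continue                                   # free column
        A[[row, row + sel[0]]] = A[[row + sel[0], row]]  # swap pivot row up
        b[[row, row + sel[0]]] = b[[row + sel[0], row]]
        for r in range(m):
            if r != row and A[r, col]:                 # eliminate column
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if np.any(b[row:]):                                # inconsistent system
        return None, None
    free = [c for c in range(n) if c not in pivots]
    x = np.zeros(n, dtype=int)
    for r, c in enumerate(pivots):                     # free variables set to 0
        x[c] = b[r]
    basis = []
    for f in free:                                     # one vector per free variable
        v = np.zeros(n, dtype=int)
        v[f] = 1
        for r, c in enumerate(pivots):
            v[c] = A[r, f]
        basis.append(v)
    return x, basis

A = np.array([[1, 1, 0], [0, 1, 1]])   # toy 2-equation, 3-variable system
b = np.array([1, 0])
particular, basis = solve_gf2(A, b)
```

    Elimination costs cubic time in general; the paper's contribution is exploiting the loop-free pedigree structure to obtain the same particular and general solutions in O(mn) and O(mn²) time, respectively.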

  14. Generalized statistical mechanics of cosmic rays: Application to positron-electron spectral indices.

    PubMed

    Yalcin, G Cigdem; Beck, Christian

    2018-01-29

    Cosmic ray energy spectra exhibit power law distributions over many orders of magnitude that are very well described by the predictions of q-generalized statistical mechanics, based on a q-generalized Hagedorn theory for transverse momentum spectra and hard QCD scattering processes. QCD at the largest center of mass energies predicts the entropic index to be [Formula: see text]. Here we show that the escort duality of the nonextensive thermodynamic formalism predicts an energy split of the effective temperature given by Δ[Formula: see text] MeV, where T_H is the Hagedorn temperature. We carefully analyse the measured data of the AMS-02 collaboration and provide evidence that the predicted temperature split is indeed observed, leading to a different energy dependence of the e⁺ and e⁻ spectral indices. We also observe a distinguished energy scale E* ≈ 50 GeV where the e⁺ and e⁻ spectral indices differ the most. Linear combinations of the escort and non-escort q-generalized canonical distributions yield excellent agreement with the measured AMS-02 data in the entire energy range.
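
    The power-law behaviour underlying these fits is easy to verify numerically: the high-energy tail of a q-exponential decays with exponent 1/(q - 1). Taking q = 11/9 for illustration (so the tail exponent is 4.5; the temperature scale is an arbitrary example):

```python
import numpy as np

# The q-exponential e_q(-E/T) = [1 + (q-1)*E/T]**(-1/(q-1)) decays as a
# power law E**(-1/(q-1)) at high energy.  With the illustrative choice
# q = 11/9, the tail exponent is 1/(q-1) = 4.5; T sets an arbitrary scale.
q = 11.0 / 9.0
T = 0.1  # GeV, example scale

def q_exp_flux(E):
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

E = np.logspace(3, 5, 50)   # energies far above T, deep in the tail
slope = np.polyfit(np.log(E), np.log(q_exp_flux(E)), 1)[0]
gamma = -slope              # local spectral index, approaching 1/(q-1)
```

    At low energies the same distribution is exponential-like, which is how a single q-generalized canonical distribution can interpolate between thermal and power-law regimes.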

  15. Obesity and Insulin Resistance Screening Tools in American Adolescents: National Health and Nutrition Examination Survey (NHANES) 1999 to 2010.

    PubMed

    Lee, Joey A; Laurson, Kelly R

    2016-08-01

    To identify which feasible obesity and insulin resistance (IR) screening tools are most strongly associated in adolescents by using a nationally representative sample. Adolescents aged 12.0 to 18.9 years who were participating in the National Health and Nutrition Examination Survey (NHANES) (n=3584) and who were measured for height, weight, waist circumference (WC), triceps and subscapular skinfold thickness, glycated hemoglobin, fasting glucose (FG), and fasting insulin (FI) level were included. Adolescents were split by gender and grouped by body mass index (BMI) percentile. Age- and gender-specific classifications were constructed for each obesity screening tool measure to account for growth and maturation. General linear models were used to establish groups objectively for analysis based on when IR began to increase. Additional general linear models were used to identify when IR significantly increased for each IR measure as the obesity group increased and to identify the variance accounted for in each obesity-IR screening tool relationship. As the obesity group increased, homeostasis model assessment-insulin resistance (HOMA-IR) and FI significantly increased, while FG increased (above the referent) only in groups with BMI percentiles ≥95.0, and glycated hemoglobin level did not vary across obesity groups. The most strongly associated screening tools were WC and FI in boys (R² = 0.253) and girls (R² = 0.257). FI had the strongest association with all of the obesity measures. Associations for BMI were slightly weaker than those for WC in relation to IR. Our findings show that WC and FI are the most strongly associated obesity and IR screening tool measures in adolescents. These feasible screening tools should be utilized in screening practices for at-risk adolescents. Copyright © 2015 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.

  16. Multipole analysis in the radiation field for linearized f (R ) gravity with irreducible Cartesian tensors

    NASA Astrophysics Data System (ADS)

    Wu, Bofeng; Huang, Chao-Guang

    2018-04-01

    The 1/r expansion in the distance to the source is applied to linearized f(R) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f(R) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f(R) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity and f(R) gravity.

  17. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that the topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
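
    The vector-sum and dot-product machinery referred to above can be sketched directly: represent each log as a unit vector along its fall azimuth and project the resultant onto the downhill direction. The azimuths below are invented example data, not the plot measurements:

```python
import numpy as np

# Treat each mapped log as a unit vector along its fall azimuth and ask
# whether the vector sum aligns with the downhill direction via a dot
# product.  The azimuths here are synthetic example data.
rng = np.random.default_rng(1)

downhill = np.deg2rad(135.0)                      # plot's downhill azimuth
azimuths = downhill + rng.normal(0.0, 0.6, 40)    # logs scattered about it

vecs = np.column_stack([np.cos(azimuths), np.sin(azimuths)])  # unit vectors
resultant = vecs.sum(axis=0)                      # vector sum over the plot
downhill_vec = np.array([np.cos(downhill), np.sin(downhill)])

# mean dot product in (-1, 1]: positive values mean a net downhill tendency
alignment = resultant @ downhill_vec / len(vecs)
```

    Comparing this alignment statistic across plots of differing steepness is one simple way to test whether slope gradient matters.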

  18. Control method for physical systems and devices

    DOEpatents

    Guckenheimer, John

    1997-01-01

    A control method for stabilizing systems or devices that are outside the control domain of a linear controller is provided. When applied to nonlinear systems, the effectiveness of this method depends upon the size of the domain of stability that is produced for the stabilized equilibrium. If this domain is small compared to the accuracy of measurements or the size of disturbances within the system, then the linear controller is likely to fail within a short period. Failure of the system or device can be catastrophic: the system or device can wander far from the desired equilibrium. The method of the invention presents a general procedure to recapture the stability of a linear controller, when the trajectory of a system or device leaves its region of stability. By using a hybrid strategy based upon discrete switching events within the state space of the system or device, the system or device will return from a much larger domain to the region of stability utilized by the linear controller. The control procedure is robust and remains effective under large classes of perturbations of a given underlying system or device.

  19. Immittance Data Validation by Kramers‐Kronig Relations – Derivation and Implications

    PubMed Central

    2017-01-01

    Abstract Explicitly based on causality, linearity (superposition) and stability (time invariance), and implicitly on continuity (consistency), finiteness (convergence) and uniqueness (single-valuedness) in the time domain, Kramers-Kronig (KK) integral transform (KKT) relations for immittances are derived as pure mathematical constructs in the complex frequency domain using the two-sided (bilateral) Laplace integral transform (LT), reduced to the Fourier domain for bounded immittances with sufficiently rapid exponential decay. Novel anti-KK relations are also derived to distinguish LTI (linear, time-invariant) systems from non-linear, unstable and acausal systems. All relations can be used to test KK transformability on the LTI principles of linearity, stability and causality of measured and model data by Fourier transform (FT) in immittance spectroscopy (IS). Integral transform relations are also provided to estimate (conjugate) immittances at zero and infinite frequency, which are particularly useful to normalise and compare data. Finally, important implications for IS are presented and suggestions for consistent data analysis are made, which apply likewise to complex-valued quantities in many fields of engineering and the natural sciences. PMID:29577007

  20. [Evaluation of pendulum testing of spasticity].

    PubMed

    Le Cavorzin, P; Hernot, X; Bartier, O; Carrault, G; Chagneau, F; Gallien, P; Allain, H; Rochcongar, P

    2002-11-01

    To identify valid measurements of spasticity derived from the pendulum test of the leg in a representative population of spastic patients. Pendulum testing was performed in 15 spastic and 10 matched healthy subjects. The reflex-mediated torque evoked in quadriceps femoris, as well as muscle mechanical parameters (viscosity and elasticity), were calculated using mathematical modelling. Correlation with the two main measures derived from the pendulum test reported in the literature (the Relaxation Index and the area under the curve) was calculated in order to select the most valid. Among mechanical parameters, only viscosity was found to be significantly higher in the spastic group. As expected, the computed integral of the reflex-mediated torque was found to be larger in spastic patients than in healthy subjects. A significant non-linear (logarithmic) correlation was found between the clinically-assessed muscle spasticity (Ashworth grading) and the computed reflex-mediated torque, emphasising the non-linear behaviour of this scale. Among measurements derived from the pendulum test which are proposed in the literature for routine estimation of spasticity, the Relaxation Index exhibited an unsuitable U-shaped pattern of variation with increasing reflex-mediated torque. By contrast, the area under the curve showed a linear relationship, which is more convenient for routine estimation of spasticity. The pendulum test of the leg is a simple technique for the assessment of spastic hypertonia. However, the measurement generally used in the literature (the Relaxation Index) exhibits serious limitations, and would benefit from being replaced by more valid measures, such as the area under the goniometric curve, especially for the assessment of therapeutics.

  1. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  2. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.

  3. Failed reciprocity in close social relationships and health: findings from the Whitehall II study.

    PubMed

    Chandola, Tarani; Marmot, Michael; Siegrist, Johannes

    2007-10-01

    To extend the model of effort-reward imbalance at work to close and more general social relationships and test the associations with different measures of health. Lack of reciprocity at work is associated with poorer health in a number of studies. However, few studies have analysed the effect of nonreciprocity in other kinds of social relationships on health. The Whitehall II Study is an ongoing prospective study of British civil servants (n=10308 at baseline in 1985-88). Cross-sectional data from the latest phase (7, n=6944 in 2002-04) were used in the analyses. The main exposure was a questionnaire measuring nonreciprocal social relations in partnership, parent-children, and general trusting relationships. Health measures included the SF-36 mental and physical component scores, General Health Questionnaire-30 depression subscale, Jenkins' Sleep disturbance questionnaire, and the Rose Angina questionnaire. Logistic and linear regression models were analysed, adjusted for potential confounders, and mediators of the association. Lack of reciprocity is associated with all measures of poorer health. This association attenuates after adjustment for previous health and additional confounders and mediators but remains significant in a majority of models. Negative social support from a close person is independently associated with reduced health, but adjusting for this effect does not eliminate the association of nonreciprocity with poor health. The effort-reward imbalance at work model has been extended to close and more general social relationships. Lack of reciprocity in partnership, parent-children and general trusting relationships is associated with poorer health.

  4. Comparison of co-expression measures: mutual information, correlation, and model based indices.

    PubMed

    Song, Lin; Langfelder, Peter; Horvath, Steve

    2012-12-09

    Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. Our results indicate that MI networks can safely be replaced by correlation networks when it comes to measuring co-expression relationships in stationary data.
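    For bivariate Gaussian variables the correlation-to-MI mapping has a closed form, I = -1/2 ln(1 - r^2) nats. A minimal sketch of such an approximating function (the paper's exact functional form may differ):

```python
import math

def mi_from_correlation(r):
    """Mutual information (in nats) implied by a correlation coefficient r;
    exact for bivariate Gaussian variables: I = -0.5 * ln(1 - r**2)."""
    if not -1.0 < r < 1.0:
        raise ValueError("r must lie strictly between -1 and 1")
    return -0.5 * math.log(1.0 - r * r)
```

    The mapping is monotone in |r|, which is one way to see why correlation-based and MI-based networks agree whenever gene pairs are close to linearly related.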

  5. An interlaboratory comparison study on the measurement of elements in PM10

    NASA Astrophysics Data System (ADS)

    Yatkin, Sinan; Belis, Claudio A.; Gerboles, Michel; Calzolai, Giulia; Lucarelli, Franco; Cavalli, Fabrizia; Trzepla, Krystyna

    2016-01-01

    An inter-laboratory comparison study was conducted to measure elemental loadings on PM10 samples, collected in Ispra, a regional background/rural site in Italy, using three different XRF (X-ray Fluorescence) methods, namely Epsilon 5 by linear calibration, Quant'X by the standardless analysis, and PIXE (Particle Induced X-ray Emission) with linear calibration. A subset of samples was also analyzed by ICP-MS (Inductively Coupled Plasma-Mass Spectrometry). Several metrics including method detection limits (MDLs), precision, bias from a NIST standard reference material (SRM 2783) quoted values, relative absolute difference, orthogonal regression and the ratio of the absolute difference between the methods to claimed uncertainty were used to compare the laboratories. The MDLs were found to be comparable for many elements. Precision estimates were less than 10% for the majority of the elements. Absolute biases from SRM 2783 remained less than 20% for the majority of certified elements. The regression results of PM10 samples showed that the three XRF laboratories measured very similar mass loadings for S, K, Ti, Mn, Fe, Cu, Br, Sr and Pb with slopes within 20% of unity. The ICP-MS results confirmed the agreement and discrepancies between XRF laboratories for Al, K, Ca, Ti, V, Cu, Sr and Pb. The ICP-MS results are inconsistent with the XRF laboratories for Fe and Zn. The absolute differences between the XRF laboratories generally remained within their claimed uncertainties, showing a pattern generally consistent with the orthogonal regression results.
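    Orthogonal regression, unlike ordinary least squares, treats both laboratories' measurements as uncertain. A minimal total-least-squares sketch via the first principal axis of the centred data (the study's exact estimator may differ):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Slope and intercept minimising perpendicular (not vertical) distances:
    total least squares via the SVD of the centred data."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    centred = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    vx, vy = vt[0]                      # first principal direction
    slope = vy / vx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

    A slope within 20% of unity, as reported above for S, K, Ti and the other elements, then indicates that two laboratories measure essentially the same loadings.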

  6. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that anesthetic brain state tracking performance of linear models is comparable to that of a high performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the Observer's Assessment of Alertness/Sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.

  7. Split diversity in constrained conservation prioritization using integer linear programming.

    PubMed

    Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt

    2015-01-01

    Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization.Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints.Despite such optimization problems falling into the area of NP hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure.We exemplarily show the results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
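    The budget-constrained PD maximisation behind these ILP models can be illustrated by brute force on a toy tree. The branches, costs, and budget below are entirely hypothetical; a real instance would use an ILP solver rather than enumeration.

```python
from itertools import combinations

# Branches as (branch_length, set of taxa below the branch); a branch
# contributes its length to PD if at least one selected taxon lies below it.
branches = [
    (4, {"a", "b"}), (3, {"c", "d"}), (2, {"a"}),
    (2, {"b"}), (1, {"c"}), (5, {"d"}),
]
cost = {"a": 2, "b": 1, "c": 1, "d": 3}   # conservation cost per taxon
budget = 4

def pd_score(taxa):
    """PD = total length of branches covered by the selected taxa."""
    return sum(length for length, below in branches if below & taxa)

# Enumerate every affordable taxon subset and keep the PD-maximal one.
best = max(
    (set(s) for k in range(len(cost) + 1) for s in combinations(cost, k)
     if sum(cost[t] for t in s) <= budget),
    key=pd_score,
)
```

    Enumeration is exponential in the number of taxa, which is exactly why the NP-hard instances in the paper are handed to an integer-programming solver instead.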

  8. Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.

    PubMed

    Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming

    2017-09-01

    Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of plant biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image derived phenotypic traits. Several image-based biomass studies state that the estimation of plant biomass is only a linear function of the projected plant area in images. However, we modeled the plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model and can explain most of the observed variance during image-derived biomass estimation. Moreover, a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can be used to estimate digital biomass accurately.
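    The generalised linear biomass model described above (volume as a function of projected area, compactness, and age) can be sketched as an ordinary least-squares fit. All data here are synthetic, with coefficients chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
area = rng.uniform(10, 100, n)           # projected plant area (px^2)
compactness = rng.uniform(0.2, 0.9, n)   # area / convex-hull area
age = rng.uniform(5, 40, n)              # days after sowing

# Synthetic "digital volume" following the assumed linear model exactly.
volume = 1.5 * area + 20.0 * compactness + 0.8 * age + 3.0

# Fit volume ~ 1 + area + compactness + age by least squares.
X = np.column_stack([np.ones(n), area, compactness, age])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
```

    With real image-derived traits the fit would of course carry residual error; the point is only the model structure, which generalises the area-only models cited in the abstract.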

  9. Frequency domain photothermoacoustic signal amplitude dependence on the optical properties of water: turbid polyvinyl chloride-plastisol system.

    PubMed

    Spirou, Gloria M; Mandelis, Andreas; Vitkin, I Alex; Whelan, William M

    2008-05-10

    Photoacoustic (more precisely, photothermoacoustic) signals generated by the absorption of photons can be related to the incident laser fluence rate. The dependence of frequency domain photoacoustic (FD-PA) signals on the optical absorption coefficient (μ_a) and the effective attenuation coefficient (μ_eff) of a turbid medium [polyvinyl chloride-plastisol (PVCP)] with tissuelike optical properties was measured, and empirical relationships between these optical properties, the photoacoustic (PA) signal amplitude, and the laser fluence rate were derived for the water/PVCP system with and without optical scatterers. The measured relationships between these sample optical properties and the PA signal amplitude were found to be linear, consistent with FD-PA theory: μ_a = a(A/Φ) - b and μ_eff = c(A/Φ) + d, where Φ is the laser fluence, A is the FD-PA amplitude, and a, ..., d are empirical coefficients determined from the experiment using linear frequency-swept modulation and a lock-in heterodyne detection technique. This quantitative technique can easily be used to measure the optical properties of general turbid media using FD-PAs.
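    Given calibrated (A/Φ, μ_a) pairs, the empirical coefficients a and b in μ_a = a(A/Φ) - b follow from a straight-line fit. The data below are synthetic, generated with a = 3.0 and b = 0.4 purely for illustration:

```python
import numpy as np

ratio = np.array([0.5, 1.0, 1.5, 2.0])   # A / Phi, arbitrary units
mu_a = 3.0 * ratio - 0.4                 # synthetic absorption coefficients

# First-degree polynomial fit returns (slope, intercept).
a, intercept = np.polyfit(ratio, mu_a, 1)
b = -intercept                           # mu_a = a*(A/Phi) - b
```

    The coefficient pair (c, d) for μ_eff would be recovered the same way from the corresponding effective-attenuation data.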

  10. Axial linear patellar displacement: a new measurement of patellofemoral congruence.

    PubMed

    Urch, Scott E; Tritle, Benjamin A; Shelbourne, K Donald; Gray, Tinker

    2009-05-01

    The tools for measuring the congruence angle with digital radiography software can be difficult to use; therefore, the authors sought to develop a new, easy, and reliable method for measuring patellofemoral congruence. The authors hypothesized that the linear displacement measurement would correlate well with the congruence angle measurement. Cohort study (diagnosis); Level of evidence, 2. On Merchant view radiographs obtained digitally, the authors measured the congruence angle and a new linear displacement measurement on preoperative and postoperative radiographs of 31 patients who suffered unilateral patellar dislocations and 100 uninjured subjects. The linear displacement measurement was obtained by drawing a reference line across the medial and lateral trochlear facets. Perpendicular lines were drawn from the depth of the sulcus through the reference line and from the apex of the posterior tip of the patella through the reference line. The distance between the perpendicular lines was the linear displacement measurement. The measurements were obtained twice at different sittings, with the observer blinded to the previous measurements to establish reliability. Measurements were compared to determine whether the linear displacement measurement correlated with the congruence angle. Intraobserver reliability was above r(2) = .90 for all measurements. In patients with patellar dislocations, the mean congruence angle preoperatively was 33.5 degrees, compared with 12.1 mm for linear displacement (r(2) = .92). The mean congruence angle postoperatively was 11.2 degrees, compared with 4.0 mm for linear displacement (r(2) = .89). For normal subjects, the mean congruence angle was -3 degrees and the mean linear displacement was 0.2 mm. The linear displacement measurement was found to correlate with congruence angle measurements and may be an easy and useful tool for clinicians to evaluate patellofemoral congruence objectively.
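    The geometric construction reduces to projecting two landmarks onto the facet reference line and taking the distance between the feet of the perpendiculars. A sketch with hypothetical landmark coordinates (in millimetres):

```python
import numpy as np

def linear_displacement(facet_med, facet_lat, sulcus, apex):
    """Distance along the facet reference line between the feet of the
    perpendiculars dropped from the sulcus depth and the patellar apex."""
    facet_med = np.asarray(facet_med, float)
    facet_lat = np.asarray(facet_lat, float)
    axis = facet_lat - facet_med
    axis = axis / np.linalg.norm(axis)   # unit vector along reference line
    t_sulcus = (np.asarray(sulcus, float) - facet_med) @ axis
    t_apex = (np.asarray(apex, float) - facet_med) @ axis
    return abs(t_apex - t_sulcus)
```

    A displacement near zero corresponds to a centred patella, matching the 0.2 mm mean reported for normal subjects.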

  11. Linear Augmentation for Stabilizing Stationary Solutions: Potential Pitfalls and Their Application

    PubMed Central

    Karnatak, Rajat

    2015-01-01

    Linear augmentation has recently been shown to be effective in targeting desired stationary solutions, suppressing bistability, regulating the dynamics of drive-response systems, and controlling the dynamics of hidden attractors. The simplicity of the procedure is the main highlight of this scheme, but questions related to its general applicability still need to be addressed. Focusing on the issue of targeting stationary solutions, this work demonstrates instances where the scheme fails to stabilize the required solutions and leads to other complicated dynamical scenarios. Examples from conservative as well as dissipative systems are presented in this regard, and important applications in dissipative predator-prey systems are discussed, which include preventative measures to avoid potentially catastrophic dynamical transitions in these systems. PMID:26544879

  12. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464

  13. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
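    In the linear framework the abstract starts from, Granger causality asks whether the past of x improves the prediction of y beyond y's own past. A minimal order-1 sketch on synthetic coupled data (a global, not state-space-resolved, measure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    # y is driven by its own past and, strongly, by past x.
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Restricted model: y[t] ~ 1 + y[t-1].  Full model: y[t] ~ 1 + y[t-1] + x[t-1].
Y = y[1:]
R = np.column_stack([np.ones(n - 1), y[:-1]])
F = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])
res_r = Y - R @ np.linalg.lstsq(R, Y, rcond=None)[0]
res_f = Y - F @ np.linalg.lstsq(F, Y, rcond=None)[0]

# Log ratio of residual variances: > 0 means x Granger-causes y.
gc = float(np.log(res_r.var() / res_f.var()))
```

    The maps in the paper go further by fitting such local linear models region by region in state space, so that the measure can vary across the attractor.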

  14. Entanglement of Multi-qudit States Constructed by Linearly Independent Coherent States: Balanced Case

    NASA Astrophysics Data System (ADS)

    Najarbashi, G.; Mirzaei, S.

    2016-03-01

    Multi-mode entangled coherent states are important resources for linear optics quantum computation and teleportation. Here we introduce the generalized balanced N-mode coherent states, which are recast in the multi-qudit case. The necessary and sufficient condition for bi-separability of such balanced N-mode coherent states is found. We particularly focus on pure and mixed multi-qubit and multi-qutrit like states and examine the degree of bipartite as well as tripartite entanglement using the concurrence measure. Unlike the N-qubit case, it is shown that there are qutrit states violating the monogamy inequality. Using parity, displacement operators and beam splitters, we propose a scheme for generating balanced N-mode entangled coherent states for an even number of terms in the superposition.
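    For the two-qubit case, the concurrence measure used above is Wootters' formula: take the square roots of the eigenvalues of ρρ̃ (with ρ̃ the spin-flipped state) in decreasing order and form max(0, λ1-λ2-λ3-λ4). A sketch checked on a Bell state:

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    r = rho @ flip @ rho.conj() @ flip          # rho * rho~ (spin-flipped)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): concurrence 1.
bell = np.zeros(4, complex)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
rho_bell = np.outer(bell, bell.conj())
```

    For the qutrit-like states in the paper a generalised concurrence is needed; this two-qubit version only illustrates the measure itself.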

  15. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauß-Seidel), CLGS (Collective Line Gauß-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
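    The simplest relative of these smoothers is a scalar Gauss-Seidel sweep, shown here for the 1-D Poisson equation -u'' = f with zero boundary values (the paper's coupled and ILU variants for Navier-Stokes are considerably more involved):

```python
import numpy as np

n, h = 9, 1.0 / 8.0
f = np.ones(n)        # right-hand side
u = np.zeros(n)       # initial guess, boundary values stay zero

def gauss_seidel_sweep(u, f, h):
    """One in-place lexicographic Gauss-Seidel sweep over interior points,
    solving (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i point by point."""
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual_norm(u, f, h):
    r = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return float(np.linalg.norm(r))

r0 = residual_norm(u, f, h)
for _ in range(100):
    gauss_seidel_sweep(u, f, h)
```

    On its own a smoother converges slowly for smooth error modes; multigrid accelerates it by correcting those modes on coarser grids, which is where the Galerkin coarse-grid approximation above comes in.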

  16. A Few New 2+1-Dimensional Nonlinear Dynamics and the Representation of Riemann Curvature Tensors

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Yufeng; Zhang, Xiangzhi

    2016-09-01

    We first introduced a linear stationary equation with a quadratic operator in ∂x and ∂y; then a linear evolution equation is given by N-order polynomials of eigenfunctions. As applications, by taking N=2, we derived a (2+1)-dimensional generalized linear heat equation with two constant parameters associated with a symmetric space. When taking N=3, a pair of generalized Kadomtsev-Petviashvili equations with the same eigenvalues as in the case of N=2 are generated. Similarly, a second-order flow associated with a homogeneous space is derived from the integrability condition of the two linear equations, which is a (2+1)-dimensional hyperbolic equation. When N=3, the third flow associated with the homogeneous space is generated, which is a pair of new generalized Kadomtsev-Petviashvili equations. Finally, as an application of a Hermitian symmetric space, we established a pair of spectral problems to obtain a new (2+1)-dimensional generalized Schrödinger equation, which is expressed by the Riemann curvature tensors.

  17. Next Linear Collider Home Page

    Science.gov Websites

    Welcome to the Next Linear Collider (NLC) Home Page, where you can learn about linear colliders in general and about this next-generation linear collider project's mission and design ideas.

  18. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between the accuracy and the response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.

  19. Control of Distributed Parameter Systems

    DTIC Science & Technology

    1990-08-01

    variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a... A unified approximation framework for parameter estimation in general linear PDE models has been completed... unified approximation framework for parameter estimation in general linear PDE models. This framework has provided the theoretical basis for a number of

  20. Contract and ownership type of general practices and patient experience in England: multilevel analysis of a national cross-sectional survey

    PubMed Central

    Laverty, Anthony A; Harris, Matthew J; Watt, Hilary C; Greaves, Felix; Majeed, Azeem

    2017-01-01

    Objective To examine associations between the contract and ownership type of general practices and patient experience in England. Design Multilevel linear regression analysis of a national cross-sectional patient survey (General Practice Patient Survey). Setting All general practices in England in 2013–2014 (n = 8017). Participants 903,357 survey respondents aged 18 years or over and registered with a general practice for six months or more (34.3% of 2,631,209 questionnaires sent). Main outcome measures Patient reports of experience across five measures: frequency of consulting a preferred doctor; ability to get a convenient appointment; rating of doctor communication skills; ease of contacting the practice by telephone; and overall experience (measured on four- or five-level interval scales from 0 to 100). Models adjusted for demographic and socioeconomic characteristics of respondents and general practice populations and a random intercept for each general practice. Results Most practices had a centrally negotiated contract with the UK government (‘General Medical Services’ 54.6%; 4337/7949). Few practices were limited companies with locally negotiated ‘Alternative Provider Medical Services’ contracts (1.2%; 98/7949); these practices provided worse overall experiences than General Medical Services practices (adjusted mean difference −3.04, 95% CI −4.15 to −1.94). Associations were consistent in direction across outcomes and largest in magnitude for frequency of consulting a preferred doctor (−12.78, 95% CI −15.17 to −10.39). Results were similar for practices owned by large organisations (defined as having ≥20 practices) which were uncommon (2.2%; 176/7949). Conclusions Patients registered to general practices owned by limited companies, including large organisations, reported worse experiences of their care than other patients in 2013–2014. PMID:29096580

  1. Contract and ownership type of general practices and patient experience in England: multilevel analysis of a national cross-sectional survey.

    PubMed

    Cowling, Thomas E; Laverty, Anthony A; Harris, Matthew J; Watt, Hilary C; Greaves, Felix; Majeed, Azeem

    2017-11-01

    Objective To examine associations between the contract and ownership type of general practices and patient experience in England. Design Multilevel linear regression analysis of a national cross-sectional patient survey (General Practice Patient Survey). Setting All general practices in England in 2013-2014 ( n = 8017). Participants 903,357 survey respondents aged 18 years or over and registered with a general practice for six months or more (34.3% of 2,631,209 questionnaires sent). Main outcome measures Patient reports of experience across five measures: frequency of consulting a preferred doctor; ability to get a convenient appointment; rating of doctor communication skills; ease of contacting the practice by telephone; and overall experience (measured on four- or five-level interval scales from 0 to 100). Models adjusted for demographic and socioeconomic characteristics of respondents and general practice populations and a random intercept for each general practice. Results Most practices had a centrally negotiated contract with the UK government ('General Medical Services' 54.6%; 4337/7949). Few practices were limited companies with locally negotiated 'Alternative Provider Medical Services' contracts (1.2%; 98/7949); these practices provided worse overall experiences than General Medical Services practices (adjusted mean difference -3.04, 95% CI -4.15 to -1.94). Associations were consistent in direction across outcomes and largest in magnitude for frequency of consulting a preferred doctor (-12.78, 95% CI -15.17 to -10.39). Results were similar for practices owned by large organisations (defined as having ≥20 practices) which were uncommon (2.2%; 176/7949). Conclusions Patients registered to general practices owned by limited companies, including large organisations, reported worse experiences of their care than other patients in 2013-2014.
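    The adjusted mean differences reported above come from regression with covariate adjustment. As a rough illustration, here is a simplified simulation (all names and numbers are invented, and the paper's per-practice random intercept is replaced by plain OLS adjustment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: patients nested in practices (all values invented).
n_practices, patients_per = 200, 50
practice = np.repeat(np.arange(n_practices), patients_per)
apms = (np.arange(n_practices) < 40).astype(float)[practice]  # 'APMS-like' indicator
age = rng.normal(50, 15, practice.size)                       # respondent covariate
u = rng.normal(0, 2, n_practices)[practice]                   # practice-level variation

# Assumed true model: the APMS-type contract lowers the 0-100 score by 3 points.
score = 80.0 - 3.0 * apms + 0.05 * (age - 50) + u + rng.normal(0, 8, practice.size)

# Simplified adjustment: OLS with the covariate (the paper's multilevel model
# would additionally fit a random intercept for each practice).
X = np.column_stack([np.ones_like(score), apms, age - 50])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"adjusted mean difference (APMS-like vs reference): {beta[1]:.2f}")
```

The coefficient on the contract indicator plays the role of the paper's adjusted mean difference.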

  2. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  3. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  4. Evaluation of expansion algorithm of measurement range suited for 3D shape measurement using two pitches of projected grating with light source-stepping method

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Toshimasa; Fujigaki, Motoharu; Murata, Yorinobu

    2015-03-01

    An accurate, wide-range shape measurement method is required in industrial fields, and the same technique can be used to measure the shape of a human body for the garment industry. Compact 3D shape measurement equipment is also required for embedding in inspection systems. Shape measurement by a phase shifting method achieves high spatial resolution because the coordinates are obtained pixel by pixel. A key device for developing compact equipment is the grating projector. The authors developed a linear LED projector and proposed a light-source-stepping method (LSSM) using it. With this method, shape measurement equipment can be produced compactly and at low cost, without any mechanical phase-shifting systems; it also enables 3D shape measurement in a very short time by switching the light sources quickly. A phase unwrapping method is necessary to widen the measurement range with constant accuracy for a phase shifting method. A common choice is phase unwrapping with two different grating pitches, one of the simplest approaches, but it is difficult to apply this conventional unwrapping algorithm to the LSSM. The authors therefore developed an expanded unwrapping algorithm for the LSSM. In this paper, this expansion algorithm of the measurement range, suited for 3D shape measurement using two pitches of projected grating with the LSSM, is evaluated.
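    The two-pitch unwrapping idea evaluated in this paper can be sketched numerically. The following is a minimal dual-pitch phase-unwrapping illustration with invented pitch values, not the authors' LSSM-specific algorithm:

```python
import numpy as np

def wrap(phi):
    """Wrap a phase into (-pi, pi]."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

# Hypothetical grating pitches (mm); the beat pitch sets the unambiguous range.
p1, p2 = 10.0, 12.0
p_beat = p1 * p2 / (p2 - p1)          # 60 mm

h_true = np.array([3.0, 17.5, 41.0])  # heights within the beat range
phi1 = wrap(2 * np.pi * h_true / p1)  # wrapped fine phase at pitch p1
phi2 = wrap(2 * np.pi * h_true / p2)  # wrapped fine phase at pitch p2

# The phase difference gives a coarse, unambiguous estimate over p_beat.
phi_coarse = wrap(phi1 - phi2)
h_coarse = np.where(phi_coarse < 0, phi_coarse + 2 * np.pi, phi_coarse) / (2 * np.pi) * p_beat

# The coarse estimate selects the fringe order of the precise fine phase.
k = np.round(h_coarse / p1 - phi1 / (2 * np.pi))
h_fine = (k + phi1 / (2 * np.pi)) * p1
```

The coarse estimate inherits the noise of both phases, while the unwrapped fine result keeps the single-pitch accuracy, which is the appeal of two-pitch unwrapping.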

  5. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs, if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show, that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
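    The attenuation caused by classical error, and its moment-based correction, can be illustrated with a toy simulation. The paper's setting (mixed Berkson/classical error in a linear mixed model with autocorrelation) is far richer; here the reliability ratio is computed from the simulated truth purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(0, 1, n)             # true exposure (unobserved in practice)
w = x + rng.normal(0, 0.5, n)       # classical measurement error, variance 0.25
y = 2.0 * x + rng.normal(0, 1, n)   # outcome, true slope 2

# Classical error attenuates the naive slope by the reliability ratio lambda.
beta_naive = np.cov(w, y)[0, 1] / np.var(w)
lam = np.var(x) / np.var(w)         # known here only because the data are simulated
beta_corrected = beta_naive / lam

# Note: pure Berkson error (true value scattering around the assigned one)
# would not attenuate the slope, which is why the error mixture matters.
```

With error variance 0.25, lambda is about 0.8, so the naive slope is pulled from 2 toward 1.6 and the method-of-moments division restores it.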

  6. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  7. Generalized prolate spheroidal wave functions for optical finite fractional Fourier and linear canonical transforms.

    PubMed

    Pei, Soo-Chang; Ding, Jian-Jiun

    2005-03-01

    Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.

  8. Eddy current gauge for monitoring displacement using printed circuit coil

    DOEpatents

    Visioli, Jr., Armando J.

    1977-01-01

    A proximity detection system for non-contact displacement and proximity measurement of static or dynamic metallic or conductive surfaces is provided wherein the measurement is obtained by monitoring the change in impedance of a flat, generally spiral-wound, printed circuit coil which is excited by a constant current, constant frequency source. The change in impedance, which is detected as a corresponding change in voltage across the coil, is related to the eddy current losses in the distant conductive material target. The arrangement provides for considerable linear displacement range with increased accuracies, stability, and sensitivity over the entire range.

  9. Projection of two biphoton qutrits onto a maximally entangled state.

    PubMed

    Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S

    2011-04-01

    Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons-a biphoton. The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. © 2011 American Physical Society

  10. State-Dependent Pseudo-Linear Filter for Spacecraft Attitude and Rate Estimation

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2001-01-01

    This paper presents the development and performance of a special algorithm for estimating the attitude and angular rate of a spacecraft. The algorithm is a pseudo-linear Kalman filter, which is an ordinary linear Kalman filter that operates on a linear model whose matrices are current state estimate dependent. The nonlinear rotational dynamics equation of the spacecraft is presented in the state space as a state-dependent linear system. Two types of measurements are considered. One type is a measurement of the quaternion of rotation, which is obtained from a newly introduced star tracker based apparatus. The other type of measurement is that of vectors, which permits the use of a variety of vector measuring sensors like sun sensors and magnetometers. While quaternion measurements are related linearly to the state vector, vector measurements constitute a nonlinear function of the state vector. Therefore, in this paper, a state-dependent linear measurement equation is developed for the vector measurement case. The state-dependent pseudo-linear filter is applied to simulated spacecraft rotations and adequate estimates of the spacecraft attitude and rate are obtained for the case of quaternion measurements as well as of vector measurements.
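    A minimal sketch of the pseudo-linear filtering idea, assuming a simple pendulum in place of spacecraft dynamics and angle-only measurements in place of quaternion or vector measurements (all constants are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
g_over_L, dt, q, r = 9.81, 0.01, 1e-5, 0.05

def sdc_matrix(theta):
    # State-dependent system matrix: the nonlinear term -g/L*sin(theta) is
    # factored as (-g/L * sin(theta)/theta) * theta, i.e. linear in the state.
    c = np.sinc(theta / np.pi)          # sin(theta)/theta, well defined at 0
    return np.array([[1.0, dt], [-g_over_L * c * dt, 1.0]])

# Simulate the true pendulum and noisy angle-only measurements.
x_true = np.array([0.8, 0.0])
truth, meas = [], []
for _ in range(1000):
    th, om = x_true
    x_true = np.array([th + om * dt, om - g_over_L * np.sin(th) * dt])
    truth.append(x_true.copy())
    meas.append(x_true[0] + rng.normal(0.0, np.sqrt(r)))

# Pseudo-linear Kalman filter: ordinary KF equations with the system matrix
# evaluated at the current state estimate.
H = np.array([[1.0, 0.0]])
x_hat, P = np.zeros(2), np.eye(2)
for z in meas:
    A = sdc_matrix(x_hat[0])
    x_hat, P = A @ x_hat, A @ P @ A.T + q * np.eye(2)   # predict
    S = (H @ P @ H.T + r).item()
    K = (P @ H.T) / S                                    # gain, shape (2, 1)
    x_hat = x_hat + K[:, 0] * (z - x_hat[0])             # update state
    P = (np.eye(2) - K @ H) @ P                          # update covariance
```

The filter stays linear in form; all nonlinearity is pushed into the state-dependent coefficient, mirroring the paper's treatment of the rotational dynamics.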

  11. Comparative Design, Modeling, and Control Analysis of Robotic Transmissions

    DTIC Science & Technology

    1990-08-01

    Stiffening transmission behaviors are shown to be of a conditionally stabilizing nature, while also reducing the dynamic range of impedance and torque control. The remainder of the record is fragmentary section and figure titles: REDEX Cycloidal Gear Reducer; Brushless DC Sensorimotors; Conclusions; Figure 5.2: Experimental Torque Linearity of Brushless DC Motor - Measured vs

  12. Real-time Adaptive Control Scheme for Superior Plasma Confinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander Trunov, Ph.D.

    2001-06-01

    During this Phase I project, IOS, in collaboration with our subcontractors at General Atomics, Inc., acquired and analyzed measurement data on various plasma equilibrium modes. We developed a Matlab-based toolbox consisting of linear and neural network approximators that are capable of learning and predicting, with accuracy, the behavior of plasma parameters. We also began development of the control algorithm capable of using the model of the plasma obtained by the neural network approximator.

  13. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
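    The "equivalent real system" approach mentioned at the end of the abstract can be sketched directly. This is a dense direct solve on a small invented complex symmetric matrix; the paper's point is that Krylov methods such as QMR can instead exploit the complex structure that this real formulation obscures:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Invented complex symmetric test matrix (A = A^T but A != A^H), loosely in the
# spirit of discretized Helmholtz operators.
B = rng.normal(size=(n, n))
A = (B + B.T) + 1j * np.diag(rng.normal(size=n))
b = rng.normal(size=n) + 1j * rng.normal(size=n)

# Equivalent real formulation: a 2n x 2n real system for the real and
# imaginary parts of x, since A x = b splits into
# [A_r -A_i; A_i A_r] [x_r; x_i] = [b_r; b_i].
Ar, Ai = A.real, A.imag
M = np.block([[Ar, -Ai], [Ai, Ar]])
rhs = np.concatenate([b.real, b.imag])
y = np.linalg.solve(M, rhs)
x = y[:n] + 1j * y[n:]
```

The real system is twice the size, and its block structure hides the complex symmetry that specialized Krylov methods exploit.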

  14. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold to the whole parameter space. Its mean-squared error matrix is compared with the generalized restricted difference-based estimator. Finally, the performance of the new estimator is explained by a simulation study and a numerical example.
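    A minimal sketch of a difference-based ridge estimator for a partial linear model, with invented data and an arbitrary ridge parameter; the paper's generalized difference-based restricted estimator additionally handles correlated errors and linear constraints:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
t = np.sort(rng.uniform(0, 1, n))          # nonparametric covariate, ordered
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, -2.0, 0.5])
# Partial linear model: y = X beta + f(t) + noise, with smooth f.
y = X @ beta_true + np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)

# First-order differencing along t removes the smooth component f(t)
# up to a small discretization remainder.
Dy = np.diff(y)
DX = np.diff(X, axis=0)

# Ridge estimation on the differenced data (k is a hypothetical ridge value).
k = 0.1
beta_hat = np.linalg.solve(DX.T @ DX + k * np.eye(3), DX.T @ Dy)
```

Differencing sidesteps estimating f, and the ridge term stabilizes the solve when the differenced design is ill-conditioned.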

  15. On Generalizations of Cochran’s Theorem and Projection Matrices.

    DTIC Science & Technology

    1980-08-01

    Recoverable citation fragments: "Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model," F. Pukelsheim and George P.H. Styan, May 1978; "... with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191; Graybill, F. A. and Marsaglia, G. (1957), "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686; Greub, W. (1975), Linear Algebra (4th ed.).

  16. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
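    The score-compression construction (compressing N data down to n = number of parameters) can be sketched for the linear-Gaussian special case, where the compression is lossless and the maximum-likelihood estimate is exactly recoverable; all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1000                          # number of data points
x = np.linspace(0, 1, N)
C = 0.2**2 * np.eye(N)            # known noise covariance
theta_true = np.array([1.0, 3.0])
d = theta_true[0] + theta_true[1] * x + rng.normal(0, 0.2, N)

# Score compression at a fiducial point theta*: t = dmu^T C^{-1} (d - mu*)
# gives two numbers that preserve the Fisher information on (theta1, theta2).
theta_fid = np.array([0.8, 2.5])
mu_fid = theta_fid[0] + theta_fid[1] * x
dmu = np.stack([np.ones(N), x])   # gradient of the mean, shape (2, N)
Cinv = np.linalg.inv(C)
t = dmu @ Cinv @ (d - mu_fid)

# For a mean linear in the parameters, the MLE is recoverable from t alone:
F = dmu @ Cinv @ dmu.T            # Fisher information matrix
theta_mle = theta_fid + np.linalg.solve(F, t)
```

Here a 1000-point data vector is reduced to two statistics with no loss of information about the two parameters, which is the sense of optimality in the abstract.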

  17. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  18. Optical soliton solutions of the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term

    NASA Astrophysics Data System (ADS)

    Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman

    2018-07-01

    A wide range of problems in different fields of the applied sciences especially non-linear optics is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSEs known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method along with symbolic computation package has been exerted to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of Kudryashov method is a direct, effectual, and reliable technique to deal with various types of non-linear Schrödinger's equations.

  19. A general framework of noise suppression in material decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan® 600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
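    The penalized weighted least-squares idea described above, with inverse-variance weights on the data-fit term plus a smoothness penalty, can be sketched in 1D (a toy denoising problem, not the authors' DECT decomposition pipeline):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
clean = np.where(np.arange(n) < 100, 1.0, 3.0)   # piecewise-constant 'material' signal
sigma = np.full(n, 0.5)                           # per-pixel noise level
noisy = clean + rng.normal(0, sigma)

# Penalized weighted least squares: minimize
#   (x - y)^T W (x - y) + lam * ||D x||^2,
# where W is the inverse noise variance (the BLUE-style weighting above).
W = np.diag(1.0 / sigma**2)
D = np.diff(np.eye(n), axis=0)                    # first-difference operator
lam = 5.0
x = np.linalg.solve(W + lam * D.T @ D, W @ noisy)
```

The closed-form solve mirrors the structure of the paper's objective: data fidelity weighted by estimated variance, traded against smoothness via lam.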

  20. Gas concentration measurement instrument based on the effects of a wave-mixing interference on stimulated emissions

    DOEpatents

    Garrett, W. Ray

    1997-01-01

    A method and apparatus for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, are suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components.

  1. Gas concentration measurement instrument based on the effects of a wave-mixing interference on stimulated emissions

    DOEpatents

    Garrett, W.R.

    1997-11-11

    A method and apparatus are disclosed for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, are suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components. 9 figs.

  2. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
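    The kind of output GLMR produces (regression ANOVA ingredients, coefficient estimates, confidence intervals) can be sketched with ordinary least squares on invented data; the normal-approximation intervals below stand in for the exact t-based intervals such a program would report:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# ANOVA table ingredients.
sse = np.sum((y - y_hat) ** 2)                 # error sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)          # regression sum of squares
df_r, df_e = X.shape[1] - 1, n - X.shape[1]
f_stat = (ssr / df_r) / (sse / df_e)

# Approximate 95% confidence intervals for the coefficients.
var_beta = (sse / df_e) * np.linalg.inv(X.T @ X).diagonal()
ci = np.column_stack([beta - 1.96 * np.sqrt(var_beta),
                      beta + 1.96 * np.sqrt(var_beta)])
```

Residuals `y - y_hat` would be the quantities plotted for diagnostics, and large off-diagonal entries of `np.corrcoef(x1, x2)` would flag multicollinearity.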

  3. Performance evaluation of matrix gradient coils.

    PubMed

    Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim

    2016-02-01

    In this paper, we present a new performance measure of a matrix coil (also known as multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil profits from using a single performance parameter that takes the local encoding performance of the coil into account in relation to the dissipated power.

  4. Feasibility and acceptability of cell phone diaries to measure HIV risk behavior among female sex workers.

    PubMed

    Roth, Alexis M; Hensel, Devon J; Fortenberry, J Dennis; Garfein, Richard S; Gunn, Jayleen K L; Wiehe, Sarah E

    2014-12-01

    Individual, social, and structural factors affecting HIV risk behaviors among female sex workers (FSWs) are difficult to assess using retrospective survey methods. To test the feasibility and acceptability of cell phone diaries to collect information about sexual events, we recruited 26 FSWs in Indianapolis, Indiana (US). Over 4 weeks, FSWs completed twice daily digital diaries about their mood, drug use, sexual interactions, and daily activities. Feasibility was assessed using repeated-measures general linear modeling, and descriptive statistics were used to examine event-level contextual information and acceptability. Of 1,420 diaries expected, 90.3 % were completed by participants and compliance was stable over time (p > .05 for linear trend). Sexual behavior was captured in 22 % of diaries and participant satisfaction with diary data collection was high. These data provide insight into event-level factors impacting HIV risk among FSWs. We discuss implications for models of sexual behavior and individually tailored interventions to prevent HIV in this high-risk group.

  5. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-07-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  6. A road map for multi-way calibration models.

    PubMed

    Escandar, Graciela M; Olivieri, Alejandro C

    2017-08-07

    A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.

  7. T-Matrix Modeling of Linear Depolarization by Morphologically Complex Soot and Soot-Containing Aerosols

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-01-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  8. The Riesz-Radon-Fréchet problem of characterization of integrals

    NASA Astrophysics Data System (ADS)

    Zakharov, Valerii K.; Mikhalev, Aleksandr V.; Rodionov, Timofey V.

    2010-11-01

    This paper is a survey of results on characterizing integrals as linear functionals. It starts from the familiar result of F. Riesz (1909) on integral representation of bounded linear functionals by Riemann-Stieltjes integrals on a closed interval, and is directly connected with Radon's famous theorem (1913) on integral representation of bounded linear functionals by Lebesgue integrals on a compact subset of {R}^n. After the works of Radon, Fréchet, and Hausdorff, the problem of characterizing integrals as linear functionals took the particular form of the problem of extending Radon's theorem from {R}^n to more general topological spaces with Radon measures. This problem turned out to be difficult, and its solution has a long and rich history. Therefore, it is natural to call it the Riesz-Radon-Fréchet problem of characterization of integrals. Important stages of its solution are associated with such eminent mathematicians as Banach (1937-1938), Saks (1937-1938), Kakutani (1941), Halmos (1950), Hewitt (1952), Edwards (1953), Prokhorov (1956), Bourbaki (1969), and others. Essential ideas and technical tools were developed by A.D. Alexandrov (1940-1943), Stone (1948-1949), Fremlin (1974), and others. Most of this paper is devoted to the contemporary stage of the solution of the problem, connected with papers of König (1995-2008), Zakharov and Mikhalev (1997-2009), and others. The general solution of the problem is presented in the form of a parametric theorem on characterization of integrals which directly implies the characterization theorems of the indicated authors. Bibliography: 60 titles.

  9. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
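
    The censoring component of such a model can be made concrete with a small sketch. Below is a minimal, illustrative Python version of a Tobit log-likelihood contribution for a single antibody measurement with a known left-censoring (detection) limit `lod`; the function name and all arguments are hypothetical, not taken from the paper.

```python
import math

def tobit_loglik(y, mu, sigma, lod):
    """Log-likelihood of one measurement under a Tobit model with
    left-censoring at the detection limit lod: censored observations
    contribute the normal CDF, observed ones the normal log-density."""
    if y <= lod:
        zc = (lod - mu) / sigma
        return math.log(0.5 * (1.0 + math.erf(zc / math.sqrt(2.0))))
    z = (y - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
```

    In the paper's setting, mu would come from the non-linear decay curve plus infant-specific random effects; here it is simply a free argument.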

  10. Paradoxical Trend for Improvement in Mental Health with Aging: A Community-Based Study of 1,546 Adults Aged 21–100 Years

    PubMed Central

    Thomas, Michael L.; Kaufmann, Christopher N.; Palmer, Barton W.; Depp, Colin A.; Martin, Averria Sirkin; Glorioso, Danielle K.; Thompson, Wesley K.; Jeste, Dilip V.

    2017-01-01

    Objective: Studies of aging usually focus on trajectories of physical and cognitive function, with far less emphasis on overall mental health, despite its impact on general health and mortality. This study examined linear and non-linear trends of physical, cognitive, and mental health over the entire adult lifespan. Method: Cross-sectional data were obtained from 1,546 individuals aged 21 to 100 years, selected using random digit dialing for the Successful AGing Evaluation (SAGE) study, a structured multi-cohort investigation that included telephone interviews and in-home surveys of community-based adults without dementia. Data were collected from 1/26/2010 to 10/07/2011 targeting participants aged 50 to 100 years, and from 6/25/2012 to 7/15/2013 targeting participants aged 21 to 50 years. Data included self-report measures of physical health, measures of both positive and negative attributes of mental health, and a phone interview-based measure of cognition. Results: Comparison of age cohorts using polynomial regression suggested a possible accelerated deterioration in physical and cognitive functioning, averaging one-and-a-half to two standard deviations over the adult lifespan. In contrast, there appeared to be a linear improvement of about one standard deviation in various attributes of mental health over the same life period. Conclusion: These cross-sectional findings suggest the possibility of a linear improvement in mental health beginning in young adulthood rather than the U-shaped curve reported in some prior studies. Lifespan research combining psychosocial and biological markers may improve our understanding of resilience to mental disability in older age, and lead to broad-based interventions promoting mental health in all age groups. PMID:27561149

  11. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
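
    The practical difference between the two frequency scales can be illustrated with a short sketch. The single-dispersion Cole-Cole function below uses illustrative, roughly water-like parameter values, not the tissue data from the study:

```python
import numpy as np

def cole_cole(f, eps_inf, delta_eps, tau, alpha):
    """Single-dispersion Cole-Cole complex relative permittivity."""
    w = 2 * np.pi * f
    return eps_inf + delta_eps / (1 + (1j * w * tau) ** (1 - alpha))

# The same band (1 MHz to 10 GHz) sampled two ways, 101 points each.
f_lin = np.linspace(1e6, 1e10, 101)   # linear frequency scale
f_log = np.logspace(6, 10, 101)       # logarithmic frequency scale

# On the linear grid nearly all points land in the top decade, so any
# least-squares fit is dominated by high-frequency behaviour; the log
# grid spreads points evenly across decades.
frac_top_decade_lin = np.mean(f_lin >= 1e9)
frac_top_decade_log = np.mean(f_log >= 1e9)

# Illustrative, roughly water-like parameters (not tissue values).
eps_log = cole_cole(f_log, 4.0, 36.0, 8.8e-12, 0.1)
```

    With these grids, about 90% of the linear-scale points lie above 1 GHz versus roughly a quarter of the logarithmic-scale points, which is one way the averaging effect can hide large localised low-frequency errors.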

  12. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  13. Using air/water/sediment temperature contrasts to identify groundwater seepage locations in small streams

    NASA Astrophysics Data System (ADS)

    Karan, S.; Sebok, E.; Engesgaard, P. K.

    2016-12-01

    To identify groundwater seepage locations in small streams within a headwater catchment, we present a method that expands on the linear regression of air and stream temperatures. By measuring temperatures at dual depths, in the stream column and at the streambed-water interface (SWI), we apply metrics from linear regression of air/stream and air/SWI temperatures (slope, intercept, and coefficient of determination) together with metrics of the daily mean temperatures (temperature variance and the average difference between the minimum and maximum daily temperatures). Our study shows that metrics from single-depth stream temperature measurements alone are not sufficient to identify substantial groundwater seepage locations within a headwater stream. Comparing the metrics from dual-depth temperatures, however, reveals significant differences: at groundwater seepage locations, temperatures at the SWI explain only 43-75% of the variation, as opposed to ≥91% for the corresponding stream column temperatures. A box plot of the daily mean temperatures shows that at several locations there is large variation in the range between the upper and lower loggers due to groundwater seepage. In general, the linear regression shows that at these locations the SWI slopes (<0.25) are substantially lower and the intercepts (>6.5°C) substantially higher, while the mean diel amplitudes (<0.98°C) are reduced compared with the remaining locations. The dual-depth approach was applied in a post-glacial fluvial setting, where the metric analyses corresponded overall to field measurements of groundwater fluxes deduced from vertical streambed temperatures and stream flow accretions. We thus propose a method that reliably identifies groundwater seepage locations along the streambed in such settings.
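
    The regression metrics used above (slope, intercept, coefficient of determination) can be sketched on synthetic temperature series. All series and parameter values below are hypothetical, constructed only to mimic the damping expected at a seepage location:

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(120)

# Hypothetical daily mean temperatures (deg C) over part of a year:
# air, stream column, and streambed-water interface (SWI) at a
# groundwater seepage location (SWI damped toward ~7.5 C groundwater).
air = 12 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1.5, days.size)
stream = 0.9 * air + 1.0 + rng.normal(0, 0.8, days.size)
swi = 0.2 * air + 7.5 + rng.normal(0, 0.8, days.size)

def regress(x, y):
    """Slope, intercept and coefficient of determination of y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2

slope_stream, _, r2_stream = regress(air, stream)
slope_swi, intercept_swi, r2_swi = regress(air, swi)
```

    At a seepage location the SWI series shows a small slope, a high intercept and a low coefficient of determination relative to the stream column series, mirroring the thresholds (slope <0.25, intercept >6.5°C) reported above.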

  14. Road Traffic and Railway Noise Exposures and Adiposity in Adults: A Cross-Sectional Analysis of the Danish Diet, Cancer, and Health Cohort.

    PubMed

    Christensen, Jeppe Schultz; Raaschou-Nielsen, Ole; Tjønneland, Anne; Overvad, Kim; Nordsborg, Rikke B; Ketzel, Matthias; Sørensen, Thorkild Ia; Sørensen, Mette

    2016-03-01

    Traffic noise has been associated with cardiovascular and metabolic disorders. Potential modes of action are through stress and sleep disturbance, which may lead to endocrine dysregulation and overweight. We aimed to investigate the relationship between residential traffic and railway noise and adiposity. In this cross-sectional study of 57,053 middle-aged people, height, weight, waist circumference, and bioelectrical impedance were measured at enrollment (1993-1997). Body mass index (BMI), body fat mass index (BFMI), and lean body mass index (LBMI) were calculated. Residential exposure to road and railway traffic noise was calculated using the Nordic prediction method. Associations between traffic noise and anthropometric measures at enrollment were analyzed using general linear models and logistic regression adjusted for demographic and lifestyle factors. Linear regression models adjusted for age, sex, and socioeconomic factors showed that 5-year mean road traffic noise exposure preceding enrollment was associated with a 0.35-cm wider waist circumference (95% CI: 0.21, 0.50) and a 0.18-point higher BMI (95% CI: 0.12, 0.23) per 10 dB. Small, significant increases were also found for BFMI and LBMI. All associations followed linear exposure-response relationships. Exposure to railway noise was not linearly associated with adiposity measures. However, exposure > 60 dB was associated with a 0.71-cm wider waist circumference (95% CI: 0.23, 1.19) and a 0.19-point higher BMI (95% CI: 0.0072, 0.37) compared with unexposed participants (0-20 dB). The present study finds positive associations between residential exposure to road traffic and railway noise and adiposity.

  15. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
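
    The ramp approximation to the empirical AUC can be sketched as follows. This is only an illustration of the idea with made-up two-marker data and a fixed coefficient vector, not the paper's RAUC algorithm (which fits the combination by a difference-of-convex-functions algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up two-marker data: cases shifted upward relative to controls.
x_cases = rng.normal([1.0, 0.5], 1.0, size=(50, 2))
x_ctrls = rng.normal([0.0, 0.0], 1.0, size=(50, 2))

def empirical_auc(beta):
    """Fraction of case/control pairs ranked correctly by score x @ beta."""
    diff = (x_cases @ beta)[:, None] - (x_ctrls @ beta)[None, :]
    return np.mean(diff > 0)

def ramp_auc(beta, s=1.0):
    """Ramp surrogate: pairwise score differences pushed through a
    piecewise-linear ramp instead of the 0/1 indicator."""
    diff = (x_cases @ beta)[:, None] - (x_ctrls @ beta)[None, :]
    return np.mean(np.clip(diff / s + 0.5, 0.0, 1.0))

beta = np.array([1.0, 0.5])   # a fixed, illustrative combination
auc = empirical_auc(beta)
```

    Unlike the step function inside the empirical AUC, the ramp surrogate is continuous in beta, which is what makes gradient-free convex-difference optimization tractable.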

  16. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    auto regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe... "Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model", The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the

  17. Use of reflectance spectrophotometry and colorimetry in a general linear model for the determination of the age of bruises.

    PubMed

    Hughes, Vanessa K; Langlois, Neil E I

    2010-12-01

    Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporation of the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth, and oxygenation of hemoglobin) were included in the General Linear Model this increased to 31%, implying that 69% of the variation was dependent on other factors. This indicates that spectrophotometry would be of more use than colorimetry for assessing the age of bruises, but the spectrophotometric method used needs to be refined to provide useful data regarding the estimated age of a bruise. Such refinements might include the use of multiple readings or utilizing a comprehensive mathematical model of the optics of skin.
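
    The spectrophotometric predictor described above, the first derivative of the reflectance scan at 490 nm, is straightforward to compute numerically. The spectrum below is a made-up smooth curve with a shoulder near 490 nm, not bruise data:

```python
import numpy as np

# Made-up reflectance scan: wavelength (nm) versus reflectance, with a
# smooth sigmoidal shoulder near 490 nm (illustrative, not real data).
wavelengths = np.arange(400.0, 701.0, 1.0)
reflectance = 0.3 + 0.4 / (1 + np.exp(-(wavelengths - 490.0) / 15.0))

# First derivative of reflectance with respect to wavelength; the value
# at 490 nm is the kind of predictor entered into a General Linear Model.
d_reflectance = np.gradient(reflectance, wavelengths)
first_deriv_490 = d_reflectance[np.argmin(np.abs(wavelengths - 490.0))]
```

    On real scans, smoothing or averaging repeated readings before differentiation would reduce noise amplification, one of the refinements the authors suggest.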

  18. Modeling soybean canopy resistance from micrometeorological and plant variables for estimating evapotranspiration using one-step Penman-Monteith approach

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis; Payero, Jose; Marek, Thomas; Porter, Dana

    2013-12-01

    Canopy resistance (rc) is one of the most important variables in evapotranspiration, agronomy, hydrology and climate change studies that link vegetation response to changing environmental and climatic variables. This study investigates the concept of generalized nonlinear/linear modeling approach of rc from micrometeorological and plant variables for soybean [Glycine max (L.) Merr.] canopy at different climatic zones in Nebraska, USA (Clay Center, Geneva, Holdrege and North Platte). Eight models estimating rc as a function of different combination of micrometeorological and plant variables are presented. The models integrated the linear and non-linear effects of regulating variables (net radiation, Rn; relative humidity, RH; wind speed, U3; air temperature, Ta; vapor pressure deficit, VPD; leaf area index, LAI; aerodynamic resistance, ra; and solar zenith angle, Za) to predict hourly rc. The most complex rc model has all regulating variables and the simplest model has only Rn, Ta and RH. The rc models were developed at Clay Center in the growing season of 2007 and applied to other independent sites and years. The predicted rc for the growing seasons at four locations were then used to estimate actual crop evapotranspiration (ETc) as a one-step process using the Penman-Monteith model and compared to the measured data at all locations. The models were able to account for 66-93% of the variability in measured hourly ETc across locations. Models without LAI generally underperformed and underestimated due to overestimation of rc, especially during full canopy cover stage. Using vapor pressure deficit or relative humidity in the models had similar effect on estimating rc. The root squared error (RSE) between measured and estimated ETc was about 0.07 mm h-1 for most of the models at Clay Center, Geneva and Holdrege. At North Platte, RSE was above 0.10 mm h-1. The results at different sites and different growing seasons demonstrate the robustness and consistency of the models in estimating soybean rc, which is encouraging towards the general application of one-step estimation of soybean canopy ETc in practice using the Penman-Monteith model and could aid in enhancing the utilization of the approach by irrigation and water management community.
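
    The one-step Penman-Monteith calculation that converts a modeled canopy resistance rc into latent heat flux can be sketched as follows. The function and the mid-day values are illustrative assumptions in SI units (Pa, W m-2, s m-1), not measurements from the study:

```python
def penman_monteith_le(rn, g, delta, gamma, rho_a, cp, vpd, ra, rc):
    """Latent heat flux (W m-2) from the Penman-Monteith combination
    equation, with canopy resistance rc supplied by a model (one-step)."""
    num = delta * (rn - g) + rho_a * cp * vpd / ra
    den = delta + gamma * (1.0 + rc / ra)
    return num / den

# Illustrative mid-day values: Rn = 500 W m-2, soil heat flux 50 W m-2,
# slope of the saturation vapor pressure curve 145 Pa K-1, psychrometric
# constant 66 Pa K-1, VPD 1.5 kPa, ra = 30 s m-1, modeled rc = 70 s m-1.
le = penman_monteith_le(rn=500.0, g=50.0, delta=145.0, gamma=66.0,
                        rho_a=1.2, cp=1013.0, vpd=1500.0, ra=30.0, rc=70.0)

LAMBDA_V = 2.45e6                        # J kg-1, latent heat of vaporization
et_mm_per_hour = le / LAMBDA_V * 3600.0  # 1 kg m-2 of water = 1 mm depth
```

    With these assumed inputs the sketch yields an hourly ETc of roughly half a millimetre, and it makes visible why overestimating rc (the den term) directly underestimates ETc, as reported for the models without LAI.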

  19. The theory of granular packings for coarse soils

    NASA Astrophysics Data System (ADS)

    Yanqui, Calixtro

    2013-06-01

    Coarse soils are substances made of grains of different shape, size and orientation. In this paper, new massive-measurable grain indexes are defined to develop a simple and systematic theory for the ideal packing of grains. First, a linear relationship between an assemblage of monodisperse spheres and an assemblage of polydisperse grains is deduced. Then, a general formula for the porosity of linearly ordered packings of spheres in contact is established by appropriately choosing eight neighboring spheres located at the vertices of the unit parallelepiped. The porosity of axisymmetric packings of grains, related to sand piles and axisymmetric compression tests, is proposed to be determined by averaging the respective linear parameters. Since they can be tested experimentally, porosities of the densest and loosest states of a granular soil can be used to verify the accuracy of the present theory. Diagrams for these extreme quantities show good agreement between the theoretical lines and the experimental data, regardless of the test protocols and mineral composition.

  20. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  1. Modeling for CO poisoning of a fuel cell anode

    NASA Technical Reports Server (NTRS)

    Dhar, H. P.; Kush, A. K.; Patel, D. N.; Christner, L. G.

    1986-01-01

    Poisoning losses in a half-cell in the 110-190 C temperature range have been measured in 100 wt pct H3PO4 for various mixtures of H2, CO, and CO2 gases in order to investigate the polarization loss due to poisoning by CO of a porous fuel cell Pt anode. At a fixed current density, the poisoning loss was found to vary linearly with ln of the CO/H2 concentration ratio, although deviations from linearity were noted at lower temperatures and higher current densities for high CO/H2 concentration ratios. The surface coverages of CO were also found to vary linearly with ln of the CO/H2 concentration ratio. A general adsorption relationship is derived. Standard free energies for CO adsorption were found to vary from -14.5 to -12.1 kcal/mol in the 130-190 C temperature range. The standard entropy for CO adsorption was found to be -39 cal/mol per deg K.
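
    The reported linear dependence of poisoning loss on ln of the CO/H2 ratio amounts to a straight-line fit in semi-log coordinates. The data points below are invented for illustration; only the functional form comes from the abstract:

```python
import numpy as np

# Invented half-cell data: CO/H2 concentration ratios and poisoning
# losses (mV) at a fixed current density, purely for illustration.
ratio = np.array([0.002, 0.005, 0.01, 0.02, 0.05])
loss_mv = np.array([35.0, 52.0, 65.0, 78.0, 95.0])

# If loss varies linearly with ln(CO/H2), a least-squares line in
# semi-log coordinates recovers the slope and intercept directly.
slope, intercept = np.polyfit(np.log(ratio), loss_mv, 1)
r = np.corrcoef(np.log(ratio), loss_mv)[0, 1]
```

    The fitted slope is what an adsorption analysis of the kind described above would relate to temperature and surface coverage.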

  2. Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data.

    PubMed

    Florescu, Dorian; Coca, Daniel

    2018-03-01

    Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
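
    The forward model underlying the second algorithm, a leaky integrate-and-fire (LIF) neuron, can be simulated in a few lines. The forward-Euler discretization and all parameter values here are illustrative assumptions, not the identification algorithm itself:

```python
import numpy as np

def lif_spike_times(u, dt, tau, r, threshold, reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron
    dv/dt = (-v + r*u)/tau; emits a spike and resets when v crosses
    the threshold. Returns the list of spike times."""
    v = reset
    spikes = []
    for k, uk in enumerate(u):
        v += dt * (-v + r * uk) / tau
        if v >= threshold:
            spikes.append(k * dt)
            v = reset
    return spikes

dt = 1e-4
t = np.arange(0.0, 0.5, dt)
u = np.full(t.size, 1.5)   # constant input; all values are illustrative
spikes = lif_spike_times(u, dt, tau=0.02, r=1.0, threshold=1.0)
```

    Identification works in the opposite direction: given the sampled input u and only the spike times, recover tau, r, the threshold, and the filter upstream of the neuron.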

  3. Streamflow droughts in major watershed regions of the conterminous U.S.: Understanding evolution of historic patterns

    NASA Astrophysics Data System (ADS)

    Pournasiri Poshtiri, M.; Pal, I.

    2015-12-01

    Climate non-stationarity affects regional hydrological extremes. This research looks into historic patterns of streamflow drought indicators and their evolution for major watershed regions in the conterminous U.S. (CONUS). The results indicate general linear and non-linear drying trends, particularly in the last four decades, as opposed to wetting trends reported in previous studies. Regional differences in the trends are notable, and echo the local climatic changes documented in the recent National Climate Assessment (NCA). A reversal of linear trends is seen for some northern regions after the 1980s. Patterns in return periods and corresponding return values of the indicators are also examined, which suggests changing risk conditions that are important for water-resources decision-making. Persistent or flash drought conditions in a river can lead to chronic or short-term water scarcity, a main driver of societal and cross-boundary conflicts. Thus, this research identifies "hotspot" locations where suitable adaptive management measures are most needed.

  4. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
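
    For the classic Gaussian equal-covariance case mentioned at the start of the abstract (i.e. without the nonlinear filter U), the t-statistic-maximizing linear function has the closed form Sigma^-1 (mu1 - mu0). A small sketch with simulated data, not the paper's generalized estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
cov = [[1.0, 0.3], [0.3, 1.0]]
# Simulated two-group data with equal covariance (hypothetical markers).
controls = rng.multivariate_normal([0.0, 0.0], cov, 200)
cases = rng.multivariate_normal([1.0, 0.5], cov, 200)

def t_stat(w):
    """Standardized difference of the linear score x @ w between groups."""
    a, b = controls @ w, cases @ w
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (b.mean() - a.mean()) / pooled

# Pooled covariance of the group-centered data; its inverse applied to
# the mean difference maximizes the t-statistic over linear functions.
centered = np.vstack([controls - controls.mean(0), cases - cases.mean(0)])
beta = np.linalg.solve(np.cov(centered.T), cases.mean(0) - controls.mean(0))
```

    The paper's contribution is to replace the case scores b with U(b) for an optimally chosen, possibly nonlinear U, which this sketch does not implement.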

  5. The Mechanisms of Aberrant Protein Aggregation

    NASA Astrophysics Data System (ADS)

    Cohen, Samuel; Vendruscolo, Michele; Dobson, Chris; Knowles, Tuomas

    2012-02-01

    We discuss the development of a kinetic theory for understanding the aberrant loss of solubility of proteins. The failure to maintain protein solubility often results in the assembly of organized linear structures, commonly known as amyloid fibrils, the formation of which is associated with over 50 clinical disorders including Alzheimer's and Parkinson's diseases. A true microscopic understanding of the mechanisms that drive these aggregation processes has proved difficult to achieve. To address this challenge, we apply the methodologies of chemical kinetics to the biomolecular self-assembly pathways related to protein aggregation. We discuss the relevant master equation and analytical approaches to studying it. In particular, we derive the underlying rate laws in closed-form using a self-consistent solution scheme; the solutions that we obtain reveal scaling behaviors that are very generally present in systems of growing linear aggregates, and, moreover, provide a general route through which to relate experimental measurements to mechanistic information. We conclude by outlining a study of the aggregation of the Alzheimer's amyloid-beta peptide. The study identifies the dominant microscopic mechanism of aggregation and reveals previously unidentified therapeutic strategies.

  6. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
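The environmental random effect described above rests on a covariance matrix built from spatial coordinates with a Gaussian radial basis function. A minimal sketch of that construction (function name, coordinates, and length scale are illustrative assumptions, not the authors' code):

```python
import numpy as np

def rbf_covariance(coords, length_scale=1.0):
    """Environmental covariance from spatial locations via a Gaussian
    radial basis function: K[i, j] = exp(-||x_i - x_j||^2 / (2 l^2))."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

# Hypothetical spatial coordinates (e.g. in km) for four individuals
coords = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
K_env = rbf_covariance(coords, length_scale=1.0)

# Nearby individuals get a large shared-environment covariance; distant
# pairs are nearly independent. K_env would enter the LMM as the
# covariance of the second (environmental) random effect.
print(np.round(K_env, 3))
```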

  7. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-06-01

The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive in a simple way estimates that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  9. Equivalence of linear canonical transform domains to fractional Fourier domains and the bicanonical width product: a generalization of the space-bandwidth product.

    PubMed

    Oktem, Figen S; Ozaktas, Haldun M

    2010-08-01

    Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.

  10. A rigorous computational approach to linear response

    NASA Astrophysics Data System (ADS)

    Bahsoun, Wael; Galatolo, Stefano; Nisoli, Isaia; Niu, Xiaolong

    2018-03-01

We present a general setting in which the formula describing the linear response of the physical measure of a perturbed system can be obtained. In this general setting we obtain an algorithm to rigorously compute the linear response. We apply our results to expanding circle maps. In particular, we present examples where we compute, up to a pre-specified error in the L∞-norm, the response of expanding circle maps under stochastic and deterministic perturbations. Moreover, we present an example where we compute, up to a pre-specified error in the L1-norm, the response of the intermittent family at the boundary; i.e. when the unperturbed system is the doubling map. This work was mainly conducted during a visit of SG to Loughborough University. WB and SG would like to thank The Leverhulme Trust for supporting mutual research visits through the Network Grant IN-2014-021. SG thanks the Department of Mathematical Sciences at Loughborough University for hospitality. WB thanks Dipartimento di Matematica, Universita di Pisa. The research of SG and IN is partially supported by EU Marie-Curie IRSES ‘Brazilian-European partnership in Dynamical Systems’ (FP7-PEOPLE-2012-IRSES 318999 BREUDS). IN was partially supported by CNPq and FAPERJ. IN would like to thank the Department of Mathematics at Uppsala University and the support of the KAW grant 2013.0315.

  11. Wave-induced hydraulic forces on submerged aquatic plants in shallow lakes.

    PubMed

    Schutten, J; Dainty, J; Davy, A J

    2004-03-01

    Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot-size descriptors (plan-form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0.2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1.5 and shoot size). The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan-form area performed similarly well as shoot-size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species.

  12. Relative fluorescent efficiency of sodium salicylate between 90 and 800 eV

    NASA Technical Reports Server (NTRS)

Angel, G. C.; Samson, J. A. R.; Williams, G.

    1986-01-01

    The relative fluorescent quantum efficiency of sodium salicylate was measured between 90 and 800 eV (138-15 A) by the use of synchrotron radiation. A general increase in efficiency was observed in this spectral range except for abrupt decreases in efficiency at the carbon and oxygen K-edges. Beyond the oxygen K-edge (532 eV) the efficiency increased linearly with the incident photon energy to the limit of the present observations.

  14. Effect of chlorides on solution corrosivity of methyldiethanolamine (MDEA) solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rooney, P.C.; Bacon, T.R.; DuPart, M.S.

    1997-08-01

Solution corrosivity of MDEA/water solutions containing added HCl or NaCl has been measured by weight-loss coupons at 250 F and by linear polarization resistance (LPR) at 208 F using carbon steel, 304SS, 316SS and 410SS. General corrosion as well as pitting or crevice corrosion tendencies were recorded for each species. Based on these results, recommendations are made for chloride levels in MDEA that minimize corrosion in gas treating operations.

  15. The cost of colorectal cancer according to the TNM stage.

    PubMed

    Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu

    2017-02-01

The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use of a sample of 529 patients. For stages I to III the initial and follow-up costs were measured. The calculation of cost for stage IV combined generalized linear models to relate the cost to the duration of follow-up based on parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was 8,644€ for stage I, 12,675€ for stage II and 13,034€ for stage III. The main component was hospitalization cost. Mean survival for stage IV, calculated by extrapolation, was 1.27 years. Its average annual cost was 22,403€, and 24,509€ to death. The total annual cost for colorectal cancer extrapolated to the whole Spanish health system was 623.9 million €. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.

  16. Mode-independent attenuation in evanescent-field sensors

    NASA Astrophysics Data System (ADS)

    Gnewuch, Harald; Renner, Hagen

    1995-03-01

Generally, the total power attenuation in multimode evanescent-field sensor waveguides is nonproportional to the bulk absorbance because the modal attenuation constants differ. Hence a direct measurement is difficult and is additionally aggravated because the waveguide absorbance is highly sensitive to the specific launching conditions at the waveguide input. A general asymptotic formula for the modal power attenuation in strongly asymmetric inhomogeneous planar waveguides with arbitrarily distributed weak absorption in the low-index superstrate is derived. Explicit expressions for typical refractive-index profiles are given. Except when very close to the cutoff, the predicted asymptotic attenuation behavior agrees well with exact calculations. The ratio of TM versus TE absorption has been derived to be (2 - n0^2/nf^2) for arbitrary profiles. Waveguides with a linear refractive-index profile show mode-independent attenuation coefficients within each polarization. Further, the asymptotic sensitivity is independent of the wavelength, so that it should be possible to directly measure the spectral variation of the bulk absorption. The mode independence of the attenuation has been verified experimentally for a second-order polynomial profile, which is close to a linear refractive-index distribution. In contrast, the attenuation in the step-profile waveguide has been found to depend strongly on the mode number, as predicted by theory. A strong spread of the modal attenuation coefficients is also predicted for the parabolic-profile waveguide sensor.

  17. Linear Transforms for Fourier Data on the Sphere: Application to High Angular Resolution Diffusion MRI of the Brain

    PubMed Central

    Haldar, Justin P.; Leahy, Richard M.

    2013-01-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603

  18. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
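The adaptive-HRF idea can be sketched as a grid search over the HRF peak delay, refitting a GLM at each candidate delay and keeping the best-fitting one. The sketch below uses a simplified double-gamma HRF and simulated data; the sampling rate, task timing, and all numerical values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                        # 10 Hz sampling, fNIRS-like
t = np.arange(0.0, 60.0, dt)
stim = ((t >= 5.0) & (t < 15.0)).astype(float)  # one hypothetical 10 s task block

def hrf(tt, peak_delay):
    # Simplified double-gamma HRF; undershoot shape 16 and ratio 1/6 are
    # illustrative values, with the peak position controlled by peak_delay
    return gamma.pdf(tt, peak_delay) - gamma.pdf(tt, 16.0) / 6.0

def regressor(peak_delay):
    # Task regressor: stimulus boxcar convolved with the candidate HRF
    return np.convolve(stim, hrf(t, peak_delay))[: len(t)] * dt

# Simulated oxy-Hb series whose true peak delay is 5 s
rng = np.random.default_rng(1)
y = 2.0 * regressor(5.0) + 0.01 * rng.standard_normal(len(t))

def sse(delay):
    x = regressor(delay)
    X = np.column_stack([x, np.ones_like(x)])   # regressor + constant term
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

# Adaptive step: choose the peak delay giving the best-fitting GLM
delays = np.arange(3.0, 9.1, 0.5)
best = min(delays, key=sse)
print(best)
```

In the paper this optimization is done separately for the oxy- and deoxy-Hb series and for each task, which is how the signal-specific optimal delays arise.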

  19. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese monitoring air quality station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best agreement between modelled and measured values was achieved by the model restricted to air temperatures above 25°C, compared with the model covering all air temperatures and the model restricted to temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when these data are not available by measurements from air quality monitoring stations or other acquisition means.
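A GLM with Poisson response and logarithmic link, as described, can be fitted by iteratively reweighted least squares (IRLS). The sketch below uses synthetic stand-ins for the pollutant and meteorological covariates; variable names, coefficients, and the pure-numpy fitter are hypothetical illustrations, not the study's model.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson GLM with log link, log E[y] = X @ beta, fitted by
    iteratively reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())           # start from the intercept-only fit
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # current fitted mean
        z = X @ beta + (y - mu) / mu     # working response
        XtWX = X.T @ (X * mu[:, None])   # Poisson working weights W = mu
        beta = np.linalg.solve(XtWX, X.T @ (mu * z))
    return beta

# Synthetic stand-ins for standardized NO2 and air temperature (hypothetical)
rng = np.random.default_rng(2)
n = 2000
no2, temp = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([np.ones(n), no2, temp])
beta_true = np.array([2.0, 0.4, -0.2])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)

beta_hat = fit_poisson_glm(X, y)
print(np.round(beta_hat, 2))
```

With enough data the recovered coefficients approach the generating values, which is the basic check one would run before using such a model for prediction.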

  20. Integrating multiple molecular sources into a clinical risk prediction signature by extracting complementary information.

    PubMed

    Hieke, Stefanie; Benner, Axel; Schlenk, Richard F; Schumacher, Martin; Bullinger, Lars; Binder, Harald

    2016-08-30

    High-throughput technology allows for genome-wide measurements at different molecular levels for the same patient, e.g. single nucleotide polymorphisms (SNPs) and gene expression. Correspondingly, it might be beneficial to also integrate complementary information from different molecular levels when building multivariable risk prediction models for a clinical endpoint, such as treatment response or survival. Unfortunately, such a high-dimensional modeling task will often be complicated by a limited overlap of molecular measurements at different levels between patients, i.e. measurements from all molecular levels are available only for a smaller proportion of patients. We propose a sequential strategy for building clinical risk prediction models that integrate genome-wide measurements from two molecular levels in a complementary way. To deal with partial overlap, we develop an imputation approach that allows us to use all available data. This approach is investigated in two acute myeloid leukemia applications combining gene expression with either SNP or DNA methylation data. After obtaining a sparse risk prediction signature, e.g. from SNP data (an automatically selected set of prognostic SNPs), by componentwise likelihood-based boosting, imputation is performed for the corresponding linear predictor by a linking model that incorporates e.g. gene expression measurements. The imputed linear predictor is then used for adjustment when building a prognostic signature from the gene expression data. For evaluation, we consider stability, as quantified by inclusion frequencies across resampling data sets. Despite an extremely small overlap in the application example with gene expression and SNPs, several genes are seen to be more stably identified when taking the (imputed) linear predictor from the SNP data into account. In the application with gene expression and DNA methylation, prediction performance with respect to survival also indicates that the proposed approach might work well. We consider imputation of linear predictor values to be a feasible and sensible approach for dealing with partial overlap in complementary integrative analysis of molecular measurements at different levels. More generally, these results indicate that a complementary strategy for integrating different molecular levels can result in more stable risk prediction signatures, potentially providing a more reliable insight into the underlying biology.

  1. Polyethylene Naphthalate Scintillator: A Novel Detector for the Dosimetry of Radioactive Ophthalmic Applicators

    PubMed Central

    Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion

    2015-01-01

    Background Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm3 and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions. Hence, they are a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Purpose, Materials and Methods Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not been shown yet. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. Results For all radiation qualities under investigation, covering a wide range of dose rates, a linearity of the detector response to the dose was shown. Conclusion Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications. PMID:27171681

  2. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
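The equivalence noted above, between the Crawford-Howell modified t-test and the t-test on a dummy-coded case indicator in ordinary least squares, can be checked numerically. A sketch with hypothetical scores (the LMM extension for repeated measures is not reproduced here):

```python
import numpy as np
from scipy import stats

def modified_t(case, controls):
    """Crawford-Howell modified t-test for one case vs. a small control
    sample: t = (x - mean) / (sd * sqrt((n + 1) / n)), df = n - 1."""
    n = len(controls)
    m, sd = np.mean(controls), np.std(controls, ddof=1)
    t = (case - m) / (sd * np.sqrt((n + 1) / n))
    return t, 2.0 * stats.t.sf(abs(t), df=n - 1)

# Hypothetical scores: eight controls and one patient
controls = np.array([98.0, 102.0, 95.0, 105.0, 100.0, 99.0, 101.0, 97.0])
t, p = modified_t(80.0, controls)

# Same t from OLS with a dummy-coded case indicator as predictor
y = np.append(controls, 80.0)
X = np.column_stack([np.ones(len(y)),
                     np.r_[np.zeros(len(controls)), 1.0]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)            # residual variance
t_reg = beta[1] / np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
print(round(t, 3), round(t_reg, 3))
```

The two statistics agree exactly, since the case observation is fitted perfectly by the dummy and the residual variance reduces to the control-sample variance.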

  3. Linear systems with structure group and their feedback invariants

    NASA Technical Reports Server (NTRS)

    Martin, C.; Hermann, R.

    1977-01-01

    A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmanian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.

  4. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  5. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  6. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  7. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
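The eigenvector analysis described corresponds to a truncated singular value decomposition of the coefficient matrix: the number k of resolvable parameter combinations is set by comparing singular values with the ratio of observational to allowable model standard deviations, and V_k V_k^T gives the parameter resolution matrix. A minimal numpy sketch with a hypothetical near-singular kernel (the matrix and threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical kernel: 6 observations, 4 parameters, last column nearly null
G = rng.standard_normal((6, 4)) @ np.diag([1.0, 0.5, 0.1, 1e-6])
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Keep the k parameter combinations resolvable at the noise level
noise_over_model = 1e-3     # ratio of data to allowable model std devs (assumed)
k = int((s > noise_over_model * s[0]).sum())

# Parameter resolution matrix R = V_k V_k^T (R = I would mean full resolution)
Vk = Vt[:k].T
R = Vk @ Vk.T
print(k, np.round(np.trace(R), 3))
```

Rows of R show how each true parameter is smeared across the recoverable combinations, which is the "resolution" diagnostic the abstract refers to.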

  8. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100

  9. Generalized channeled polarimetry.

    PubMed

    Alenin, Andrey S; Tyo, J Scott

    2014-05-01

    Channeled polarimeters measure polarization by modulating the measured intensity in order to create polarization-dependent channels that can be demodulated to reveal the desired polarization information. A number of channeled systems have been described in the past, but their proposed designs often unintentionally sacrifice optimality for ease of algebraic reconstruction. To obtain more optimal systems, a generalized treatment of channeled polarimeters is required. This paper describes methods that enable handling of multi-domain modulations and reconstruction of polarization information using linear algebra. We make practical choices regarding use of either Fourier or direct channels to make these methods more immediately useful. Employing the introduced concepts to optimize existing systems often results in superficial system changes, like changing the order, orientation, thickness, or spacing of polarization elements. For the two examples we consider, we were able to reduce noise in the reconstruction to 34.1% and 57.9% of the original design values.

  10. Energy-momentum tensors in linearized Einstein's theory and massive gravity: The question of uniqueness

    NASA Astrophysics Data System (ADS)

    Bičák, Jiří; Schmidt, Josef

    2016-01-01

    The question of the uniqueness of energy-momentum tensors in the linearized general relativity and in the linear massive gravity is analyzed without using variational techniques. We start from a natural ansatz for the form of the tensor (for example, that it is a linear combination of the terms quadratic in the first derivatives), and require it to be conserved as a consequence of field equations. In the case of the linear gravity in a general gauge we find a four-parametric system of conserved second-rank tensors which contains a unique symmetric tensor. This turns out to be the linearized Landau-Lifshitz pseudotensor employed often in full general relativity. We elucidate the relation of the four-parametric system to the expression proposed recently by Butcher et al. "on physical grounds" in harmonic gauge, and we show that the results coincide in the case of high-frequency waves in vacuum after a suitable averaging. In the massive gravity we show how one can arrive at the expression which coincides with the "generalized linear symmetric Landau-Lifshitz" tensor. However, there exists another uniquely given simpler symmetric tensor which can be obtained by adding the divergence of a suitable superpotential to the canonical energy-momentum tensor following from the Fierz-Pauli action. In contrast to the symmetric tensor derived by the Belinfante procedure which involves the second derivatives of the field variables, this expression contains only the field and its first derivatives. It is simpler than the generalized Landau-Lifshitz tensor but both yield the same total quantities since they differ by the divergence of a superpotential. We also discuss the role of the gauge conditions in the proofs of the uniqueness. In the Appendix, the symbolic tensor manipulation software cadabra is briefly described. It is very effective in obtaining various results which would otherwise require lengthy calculations.

  11. A mathematical theory of learning control for linear discrete multivariable systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Longman, Richard W.

    1988-01-01

    When tracking control systems are used in repetitive operations such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to ensure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
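
    The trial-to-trial learning idea can be sketched with a simple P-type learning law on a hypothetical discrete-time plant. The paper's formalism covers general time-varying multivariable systems; the plant matrices, learning gain, and trajectory below are illustrative assumptions only:

```python
import numpy as np

# Hypothetical discrete-time plant x_{t+1} = A x_t + B u_t, y_t = C x_{t+1}
# (output read after the state update, so CB is the one-step input gain).
A = np.array([[0.3, 0.1], [0.0, 0.2]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
N = 50
y_des = np.sin(np.linspace(0.0, np.pi, N))       # desired trajectory

def run_trial(u):
    """Run one repetition with input sequence u; return the output sequence."""
    x = np.zeros((2, 1))
    y = np.zeros(N)
    for t in range(N):
        x = A @ x + B * u[t]
        y[t] = (C @ x).item()
    return y

# P-type learning law: u_{k+1}(t) = u_k(t) + gamma * e_k(t).
# Convergence here needs |1 - gamma * CB| < 1; CB = 1 and gamma = 0.5.
u = np.zeros(N)
gamma = 0.5
for trial in range(150):
    e = y_des - run_trial(u)
    u = u + gamma * e
print(np.max(np.abs(y_des - run_trial(u))))
```

    Each repetition reuses the previous trial's error to correct the stored input sequence, so the tracking error contracts from trial to trial.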

  12. Geologic and mineral and water resources investigations in western Colorado using ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Knepper, D. H., Jr. (Principal Investigator); Hutchinson, R. M.; Sawatzky, D. L.; Trexler, D. W.; Bruns, D. L.; Nicolais, S. M.

    1973-01-01

    The author has identified the following significant results. Topography was found to be the most important factor defining folds on ERTS-1 imagery of northwestern Colorado; tonal variations caused by rock reflectance and vegetation type and density are the next most important factors. Photo-linears mapped on ERTS-1 imagery of central Colorado correlate well with ground-measured joint and fracture trends. In addition, photo-linears have been successfully used to determine the location and distribution of metallic mineral deposits in the Colorado Mineral Belt. True color composites are best for general geologic analysis and false color composites prepared with positive/negative masks are useful for enhancing local geologic phenomena. During geologic analysis of any given area, ERTS-1 imagery from several different dates should be studied.

  13. ISPAN (Interactive Stiffened Panel Analysis): A tool for quick concept evaluation and design trade studies

    NASA Technical Reports Server (NTRS)

    Hairr, John W.; Dorris, William J.; Ingram, J. Edward; Shah, Bharat M.

    1993-01-01

    Interactive Stiffened Panel Analysis (ISPAN) modules, written in FORTRAN, were developed to provide an easy-to-use tool for creating finite element models of composite material stiffened panels. The modules allow the user to interactively construct, solve and post-process finite element models of four general types of structural panel configurations using only the panel dimensions and properties as input data. Linear, buckling and post-buckling solution capability is provided. This interactive input allows rapid model generation and solution by users without finite element expertise. The results of a parametric study of a blade-stiffened panel are presented to demonstrate the usefulness of the ISPAN modules. In addition, a non-linear analysis of a test panel was conducted and the results compared to measured data and previous correlation analysis.

  14. Perturbations of the Kerr black hole and the boundedness of linear waves

    NASA Astrophysics Data System (ADS)

    Eskin, G.

    2010-11-01

    Artificial black holes (also called acoustic or optical black holes) are the black holes for the linear wave equation describing the wave propagation in a moving medium. They have attracted considerable interest from physicists, who study them to better understand the black holes in general relativity. We consider the case of stationary axisymmetric metrics and we show that the Kerr black hole is not stable under perturbations in the class of all axisymmetric metrics. We describe families of axisymmetric metrics having black holes that are the perturbations of the Kerr black hole. We also show that the ergosphere can be determined by boundary measurements. Finally, we prove the uniform boundedness of the solution in the exterior of the black hole when the event horizon coincides with the ergosphere.

  15. X-33 XRS-2200 Linear Aerospike Engine Sea Level Plume Radiation

    NASA Technical Reports Server (NTRS)

    D'Agostino, Mark G.; Lee, Young C.; Wang, Ten-See; Turner, Jim (Technical Monitor)

    2001-01-01

    Wide band plume radiation data were collected during ten sea level tests of a single XRS-2200 engine at the NASA Stennis Space Center in 1999 and 2000. The XRS-2200 is a liquid hydrogen/liquid oxygen fueled, gas generator cycle linear aerospike engine which develops 204,420 lbf thrust at sea level. Instrumentation consisted of six hemispherical radiometers and one narrow view radiometer. Test conditions varied from 100% to 57% power level (PL) and 6.0 to 4.5 oxidizer to fuel (O/F) ratio. Measured radiation rates generally increased with engine chamber pressure and mixture ratio. One hundred percent power level radiation data were compared to predictions made with the FDNS and GASRAD codes. Predicted levels ranged from 42% over to 7% under average test values.

  16. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.

  17. Asymptotic aspect of derivations in Banach algebras.

    PubMed

    Roh, Jaiok; Chang, Ick-Soon

    2017-01-01

    We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. We also consider linear derivations on Banach algebras, first studying conditions for a linear derivation on a Banach algebra and then examining the functional inequalities related to a linear derivation and their stability. We finally treat central linear derivations with radical ranges on semiprime Banach algebras and a continuous linear generalized left derivation on a semisimple Banach algebra.

  18. The clustering of QSOs and the dark matter halos that host them

    NASA Astrophysics Data System (ADS)

    Zhao, Dong-Yao; Yan, Chang-Shuo; Lu, Youjun

    2013-10-01

    The spatial clustering of QSOs is an important measurable quantity which can be used to infer the properties of dark matter halos that host them. We construct a simple QSO model to explain the linear bias of QSOs measured by recent observations and explore the properties of dark matter halos that host a QSO. We assume that major mergers of dark matter halos can lead to the triggering of QSO phenomena, and the evolution of luminosity for a QSO generally shows two accretion phases, i.e., initially having a constant Eddington ratio due to the self-regulation of the accretion process when supply is sufficient, and then declining in rate with time as a power law due to either diminished supply or long term disk evolution. Using a Markov Chain Monte Carlo method, the model parameters are constrained by fitting the observationally determined QSO luminosity functions (LFs) in the hard X-ray and in the optical band simultaneously. Adopting the model parameters that best fit the QSO LFs, the linear bias of QSOs can be predicted and then compared with the observational measurements by accounting for various selection effects in different QSO surveys. We find that the latest measurements of the linear bias of QSOs from both the SDSS and BOSS QSO surveys can be well reproduced. The typical mass of SDSS QSOs at redshift 1.5 < z < 4.5 is ~(3-6) × 10¹² h⁻¹ M⊙ and the typical mass of BOSS QSOs at z ~ 2.4 is ~2 × 10¹² h⁻¹ M⊙. For relatively faint QSOs, the mass distribution of their host dark matter halos is wider than that of bright QSOs because faint QSOs can be hosted in both big halos and smaller halos, but bright QSOs are only hosted in big halos, which is part of the reason for the predicted weak dependence of the linear biases on the QSO luminosity.

  19. Spatial distribution of microbial biomass, activity, community structure, and the biodegradation of linear alkylbenzene sulfonate (LAS) and linear alcohol ethoxylate (LAE) in the subsurface.

    PubMed

    Federle, T W; Ventullo, R M; White, D C

    1990-12-01

    The vertical distribution of microbial biomass, activity, community structure and the mineralization of xenobiotic chemicals was examined in two soil profiles in northern Wisconsin. One profile was impacted by infiltrating wastewater from a laundromat, while the other served as a control. An unconfined aquifer was present 14 meters below the surface at both sites. Biomass and community structure were determined by acridine orange direct counts and measuring concentrations of phospholipid-derived fatty acids (PLFA). Microbial activity was estimated by measuring fluorescein diacetate (FDA) hydrolysis, thymidine incorporation into DNA, and mixed amino acid (MAA) mineralization. Mineralization kinetics of linear alkylbenzene sulfonate (LAS) and linear alcohol ethoxylate (LAE) were determined at each depth. Except for MAA mineralization rates, measures of microbial biomass and activity exhibited similar patterns with depth. PLFA concentration and rates of FDA hydrolysis and thymidine incorporation decreased 10-100 fold below 3 m and then exhibited little variation with depth. Fungal fatty acid markers were found at all depths and represented from 1 to 15% of the total PLFAs. The relative proportion of tuberculostearic acid (TBS), an actinomycete marker, declined with depth and was not detected in the saturated zone. The profile impacted by wastewater exhibited higher levels of PLFA but a lower proportion of TBS than the control profile. This profile also exhibited faster rates of FDA hydrolysis and amino acid mineralization at most depths. LAS was mineralized in the upper 2 m of the vadose zone and in the saturated zone of both profiles. Little or no LAS biodegradation occurred at depths between 2 and 14 m. LAE was mineralized at all depths in both profiles, and the mineralization rate exhibited a similar pattern with depth as biomass and activity measurements. 
In general, biomass and biodegradative activities were much lower in groundwater than in soil samples obtained from the same depth.

  20. Variable selection for marginal longitudinal generalized linear models.

    PubMed

    Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio

    2005-06-01

    Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).
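
    As a point of reference, the classical Mallows's C(p) that GC(p) generalizes can be sketched for an ordinary linear model. The data below are synthetic and the GEE-based GC(p) itself is not implemented; this only illustrates the subset-selection criterion being generalized:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 4))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)   # only x0, x1 matter

def rss(cols):
    # Residual sum of squares for an intercept-plus-subset linear model.
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ beta
    return r @ r

sigma2 = rss([0, 1, 2, 3]) / (n - 5)       # error variance from the full model

def cp(cols):
    # Mallows's Cp = RSS_p / sigma^2 - n + 2p, with p parameters (incl. intercept).
    return rss(cols) / sigma2 - n + 2 * (len(cols) + 1)

subsets = [list(c) for k in range(1, 5) for c in combinations(range(4), k)]
best = min(subsets, key=cp)
print(best)
```

    A good model has Cp close to its parameter count; the 2p term penalizes noise variables that reduce RSS only slightly.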

  1. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
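
    The decomposition of each vulnerability variable into a level, a trend, and residual variability can be approximated with simple per-farm regressions. The data below are hypothetical, and the article fits linear mixed models rather than separate per-farm fits; this is only a sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(2008, 2014)

# Hypothetical yearly economic-efficiency records (%) for three farms.
farms = {
    "farm_A": 55 + 1.0 * (years - 2008) + rng.normal(0, 1.0, 6),   # high, rising
    "farm_B": 45 - 1.5 * (years - 2008) + rng.normal(0, 1.0, 6),   # declining
    "farm_C": 50 + 0.0 * (years - 2008) + rng.normal(0, 6.0, 6),   # erratic
}

summary = {}
for name, vals in farms.items():
    slope, intercept = np.polyfit(years, vals, 1)      # per-farm trend
    resid = vals - (slope * years + intercept)         # year-to-year variability
    summary[name] = (vals.mean(), slope, resid.std())
    print(name, [round(v, 2) for v in summary[name]])
```

    Under the article's criterion, the least vulnerable farm combines a high mean, a stable or increasing slope, and a small residual spread.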

  3. Polarization effects in cutaneous autofluorescent spectra

    NASA Astrophysics Data System (ADS)

    Borisova, E.; Angelova, L.; Jeliazkova, Al.; Genova, Ts.; Pavlova, E.; Troyanova, P.; Avramov, L.

    2014-05-01

    Using polarized light for fluorescence excitation, one can obtain a response related to the anisotropy features of the extracellular matrix. Fluorophore anisotropy is attenuated during lesion growth, and the degree of this decrease could be correlated with the stage of tumor development. Our preliminary investigations are based on in vivo point-by-point measurements of excitation-emission matrices (EEMs) from the skin of healthy volunteers of different ages and at different anatomical sites, using a linear polarizer and analyzer for the excitation and detected emission light. Measurements were made using a FluoroLog 3 spectrofluorimeter (HORIBA Jobin Yvon, France) with a fiber-optic probe in steady-state regime, with excitation in the 280-440 nm region. Three configurations were evaluated and the corresponding excitation-emission matrices were recorded: parallel and perpendicular orientations of the linear polarizer and analyzer, and no polarization of the excitation and fluorescence light detected from the forearm skin surface. The fluorescence spectra obtained reveal differences in spectral intensity, related to general attenuation due to the filtering effects of the polarizer/analyzer couple. Significant spectral-shape changes were observed in the complex autofluorescence signal, correlated with collagen and protein cross-link fluorescence, which could be attributed to the tissue extracellular matrix and the general condition of the skin investigated, owing to morphological destruction during lesion growth. A correlation between the volunteers' age and the detected fluorescence spectra was also observed during our measurements. Our next step is to enlarge the initial database, evaluate all sources of intrinsic fluorescence polarization effects, and determine whether they are significantly altered between normal skin and the cancerous state of the tissue, with the aim of developing a non-invasive diagnostic tool for dermatological practice.

  4. Protein fiber linear dichroism for structure determination and kinetics in a low-volume, low-wavelength couette flow cell.

    PubMed

    Dafforn, Timothy R; Rajendra, Jacindra; Halsall, David J; Serpell, Louise C; Rodger, Alison

    2004-01-01

    High-resolution structure determination of soluble globular proteins relies heavily on x-ray crystallography techniques. Such an approach is often ineffective for investigations into the structure of fibrous proteins as these proteins generally do not crystallize. Thus investigations into fibrous protein structure have relied on less direct methods such as x-ray fiber diffraction and circular dichroism. Ultraviolet linear dichroism has the potential to provide additional information on the structure of such biomolecular systems. However, existing systems are not optimized for the requirements of fibrous proteins. We have designed and built a low-volume (200 microL), low-wavelength (down to 180 nm), low-pathlength (100 microm), high-alignment flow-alignment system (couette) to perform ultraviolet linear dichroism studies on the fibers formed by a range of biomolecules. The apparatus has been tested using a number of proteins for which longer wavelength linear dichroism spectra had already been measured. The new couette cell has also been used to obtain data on two medically important protein fibers, the all-beta-sheet amyloid fibers of the Alzheimer's derived protein Abeta and the long-chain assemblies of alpha1-antitrypsin polymers.

  5. Issues in vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Corr, Lawrence R.; Ma, Tianwei

    2018-05-01

    In this study, fundamental issues related to bandwidth and nonlinear resonance in vibrational energy harvesting devices are investigated. The results show that using bandwidth as a criterion to measure device performance can be misleading. For a linear device, an enlarged bandwidth is achieved at the cost of sacrificing device performance near resonance, and thus widening the bandwidth may offer benefits only when the natural frequency of the linear device cannot match the dominant excitation frequency. For a nonlinear device, since the principle of superposition does not apply, the "broadband" performance improvements achieved for single-frequency excitations may not be achievable for multi-frequency excitations. It is also shown that a large-amplitude response based on the traditional "nonlinear resonance" does not always result in the optimal performance for a nonlinear device because of the negative work done by the excitation, which indicates energy is returned back to the excitation. Such undesired negative work is eliminated at global resonance, a generalized resonant condition for both linear and nonlinear systems. While the linear resonance is a special case of global resonance for a single-frequency excitation, the maximum potential of nonlinear energy harvesting can be reached for multi-frequency excitations by using global resonance to simultaneously harvest energy distributed over multiple frequencies.

  6. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  7. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network model, the general regression neural network (GRNN), to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the GRNN model is efficient and feasible for predicting the LOD change.
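
    A GRNN is essentially a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson estimator). A minimal sketch on a synthetic periodic series standing in for LOD data follows; the bandwidth, embedding length, and data are illustrative assumptions:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    # GRNN prediction: Gaussian-kernel weighted average of training targets,
    # with kernel bandwidth sigma.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy LOD-like series: predict the next value from the previous three.
rng = np.random.default_rng(2)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(400)
emb = np.lib.stride_tricks.sliding_window_view(series, 4)
X, y = emb[:, :3], emb[:, 3]
pred = grnn_predict(X[:300], y[:300], X[300:])
mae = float(np.mean(np.abs(pred - y[300:])))
print(round(mae, 3))
```

    Unlike a back-propagation network, the GRNN has no iterative training; its only free parameter is the kernel bandwidth.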

  8. Experimental determination of entanglement with a single measurement.

    PubMed

    Walborn, S P; Souto Ribeiro, P H; Davidovich, L; Mintert, F; Buchleitner, A

    2006-04-20

    Nearly all protocols requiring shared quantum information--such as quantum teleportation or key distribution--rely on entanglement between distant parties. However, entanglement is difficult to characterize experimentally. All existing techniques for doing so, including entanglement witnesses or Bell inequalities, disclose the entanglement of some quantum states but fail for other states; therefore, they cannot provide satisfactory results in general. Such methods are fundamentally different from entanglement measures that, by definition, quantify the amount of entanglement in any state. However, these measures suffer from the severe disadvantage that they typically are not directly accessible in laboratory experiments. Here we report a linear optics experiment in which we directly observe a pure-state entanglement measure, namely concurrence. Our measurement set-up includes two copies of a quantum state: these 'twin' states are prepared in the polarization and momentum degrees of freedom of two photons, and concurrence is measured with a single, local measurement on just one of the photons.
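
    For pure two-qubit states the concurrence has a simple closed form, C = 2|ad - bc| in the computational basis. A minimal numerical sketch of this formula follows (the experiment itself obtains the quantity optically from twin copies of the state, which is not modeled here):

```python
import numpy as np

def concurrence_pure(psi):
    # For a pure two-qubit state psi = (a, b, c, d) in the |00>, |01>, |10>, |11>
    # basis, the concurrence is C = 2|ad - bc|: 0 for product states,
    # 1 for maximally entangled states.
    a, b, c, d = psi
    return 2.0 * abs(a * d - b * c)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)    # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])                # |00>
print(concurrence_pure(bell), concurrence_pure(product))
```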

  9. Optimal and secure measurement protocols for quantum sensor networks

    NASA Astrophysics Data System (ADS)

    Eldredge, Zachary; Foss-Feig, Michael; Gross, Jonathan A.; Rolston, S. L.; Gorshkov, Alexey V.

    2018-04-01

    Studies of quantum metrology have shown that the use of many-body entangled states can lead to an enhancement in sensitivity when compared with unentangled states. In this paper, we quantify the metrological advantage of entanglement in a setting where the measured quantity is a linear function of parameters individually coupled to each qubit. We first generalize the Heisenberg limit to the measurement of nonlocal observables in a quantum network, deriving a bound based on the multiparameter quantum Fisher information. We then propose measurement protocols that can make use of Greenberger-Horne-Zeilinger (GHZ) states or spin-squeezed states and show that in the case of GHZ states the protocol is optimal, i.e., it saturates our bound. We also identify nanoscale magnetic resonance imaging as a promising setting for this technology.

  10. Esthetic Assessment of the Effect of Gingival Exposure in the Smile of Patients with Unilateral and Bilateral Maxillary Incisor Agenesis.

    PubMed

    Pinho, Teresa; Bellot-Arcís, Carlos; Montiel-Company, José María; Neves, Manuel

    2015-07-01

    The aim of this study was to determine the dental esthetic perception of the smile of patients with maxillary lateral incisor agenesis (MLIA); the perceptions were examined pre- and post-treatment. Esthetic determinations were made with regard to the gingival exposure in the patients' smile by orthodontists, general dentists, and laypersons. Three hundred eighty-one people (80 orthodontists, 181 general dentists, 120 laypersons) rated the attractiveness of the smile in four cases before and after treatment, comprising two cases with unilateral MLIA and contralateral microdontia and two with bilateral MLIA. For each case, the buccal photograph was adjusted using a computer to apply standard lips to create high, medium, and low smiles. A numeric scale was used to measure the esthetic rating perceived by the judges. The resulting arithmetic means were compared using an ANOVA test, a linear trend test, and a Student's t-test, applying a significance level of p < 0.05. The predictive capability of the variables (unilateral or bilateral MLIA, symmetry of the treatment, gingival exposure of the smile, group, and gender) was assessed using a multivariable linear regression model. In the pre- and post-treatment cases, medium smile photographs received higher scores than the same cases with high or low smiles, with significant differences between them. In all cases, orthodontists were the least-tolerant evaluation group (assigning the lowest scores), followed by general dentists. In a predictive linear regression model, bilateral MLIA was the more predictive variable in pretreatment cases. The gingival exposure of the smile was a predictive variable in post-treatment cases only. The medium-height smile was considered to be more attractive. In all cases, orthodontists gave the lowest scores, followed by general dentists. Laypersons and male evaluators gave the highest scores. Symmetrical treatments scored higher than asymmetrical treatments. 
The gingival exposure had a significant influence on the esthetic perception of smiles in post-treatment cases. © 2014 by the American College of Prosthodontists.

  11. TI-59 Programs for Multiple Regression.

    DTIC Science & Technology

    1980-05-01

    general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), with Y (n×1), X (n×k), β (k×1), and ε (n×1), where Y is the vector of n... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X′X)⁻¹ and σ̂², based on the t... PROGRAM DESCRIPTION: for the general linear hypothesis model Y = Xβ + ε, calculates

  12. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  13. Statistical inference for template aging

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
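
    A minimal sketch of the first approach (a generalized linear model for error rates over time) follows, with a logit link assumed and hypothetical binomial error counts; this is a hand-rolled IRLS fit for illustration, not the authors' code or data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(12, dtype=float)
    trials = np.full(12, 500)                       # matching attempts per month
    true_logit = -3.0 + 0.15 * months               # error rate drifts upward over time
    errors = rng.binomial(trials, 1 / (1 + np.exp(-true_logit)))

    # Binomial GLM (logit link), error rate linear in time, fitted by Newton/IRLS
    X = np.column_stack([np.ones_like(months), months])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        W = trials * p * (1 - p)                    # binomial IRLS weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (errors - trials * p))
    ```

    A positive fitted slope (beta[1]) indicates error rates increasing with time; its standard error from (X′WX)⁻¹ would feed the significance test discussed in the record.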

  14. A multiloop generalization of the circle criterion for stability margin analysis

    NASA Technical Reports Server (NTRS)

    Safonov, M. G.; Athans, M.

    1979-01-01

    In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.

  15. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding measured parameters and of appropriate modelling of discrete components, and giving specific recipes for good modelling practices.

  16. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
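
    The unmixing step implied by this model — every pixel spectrum as a convex combination of endmember spectra — can be sketched as follows. The endmember spectra and abundances here are made-up values, and the sum-to-one constraint is imposed softly by a heavily weighted augmentation row rather than exactly:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical endmember spectra as columns (4 bands x 3 endmembers)
    E = np.array([[0.1, 0.8, 0.3],
                  [0.4, 0.7, 0.1],
                  [0.9, 0.2, 0.5],
                  [0.6, 0.3, 0.8]])
    a_true = np.array([0.2, 0.5, 0.3])      # abundances: nonnegative, sum to one
    pixel = E @ a_true                      # observed mixed-pixel spectrum

    # Nonnegative least squares with a soft sum-to-one constraint: append a
    # heavily weighted row of ones so the abundance vector stays in the simplex
    delta = 1e3
    E_aug = np.vstack([E, delta * np.ones(3)])
    p_aug = np.append(pixel, delta)
    a_hat, _ = nnls(E_aug, p_aug)           # recovered abundances
    ```

    The nonnegativity and (soft) sum-to-one constraints are exactly what restrict the solution to the convex closure of the endmembers described in the abstract.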

  17. Multivariate and repeated measures (MRM): A new toolbox for dependent and multimodal group-level neuroimaging data

    PubMed Central

    McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen

    2016-01-01

    Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
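
    The permutation-based inference mentioned above can be illustrated in miniature with a two-group mean comparison on synthetic data; this is a deliberately simplified stand-in for the toolbox's multivariate permutation scheme, not its implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    group_a = rng.normal(0.0, 1.0, size=30)
    group_b = rng.normal(1.5, 1.0, size=30)     # true shift of 1.5

    observed = group_b.mean() - group_a.mean()
    pooled = np.concatenate([group_a, group_b])

    # Null distribution: repeatedly shuffle group labels and recompute the statistic
    n_perm = 5000
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[30:].mean() - perm[:30].mean()

    # Two-sided permutation p-value (with the +1 correction for the observed draw)
    p_value = (1 + np.sum(np.abs(null) >= abs(observed))) / (n_perm + 1)
    ```

    The same label-shuffling logic generalizes to multivariate statistics (e.g., Wilks' lambda) by swapping in the statistic of interest.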

  18. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
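
    In the spirit of the generalized-linear-regression view of symbolic regression described here, the following sketch regresses the acceleration of a harmonic oscillator onto a small library of candidate functions and recovers the law ẍ = −ω²x. The data are synthetic and noise-free, not the paper's setup:

    ```python
    import numpy as np

    # Harmonic oscillator samples with omega = 2: x(t) = cos(2t), so xdd = -4 x
    t = np.linspace(0, 10, 400)
    x = np.cos(2 * t)
    v = -2 * np.sin(2 * t)
    xdd = -4 * np.cos(2 * t)

    # Candidate function library; linear regression picks out the right term
    library = np.column_stack([x, v, x**3, x * v])
    coef, *_ = np.linalg.lstsq(library, xdd, rcond=None)
    # coef should be close to [-4, 0, 0, 0], identifying xdd = -4 x
    ```

    Fast function extraction works in this fashion with a much larger generated basis; genetic programming instead searches over function compositions directly.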

  19. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  20. Investigating pitting in X65 carbon steel using potentiostatic polarisation

    NASA Astrophysics Data System (ADS)

    Mohammed, Sikiru; Hua, Yong; Barker, R.; Neville, A.

    2017-11-01

    Although pitting corrosion in passive materials is generally well understood, the growth of surface pits in actively corroding materials has received much less attention to date and remains poorly understood. One of the key challenges is repeatedly and reliably generating surface pits in a practical time-frame in the absence of deformation and/or residual stress so that studies on pit propagation and healing can be performed. Another pertinent issue is how to evaluate pitting while accounting for general corrosion in low carbon steel. In this work, potentiostatic polarisation was employed to induce corrosion pits (free from deformation or residual stress) on actively corroding X65 carbon steel. The influence of applied potential (50 mV, 100 mV and 150 mV vs open circuit potential) was investigated over 24 h in a CO2-saturated, 3.5 wt.% NaCl solution at 30 °C and pH 3.8. Scanning electron microscopy (SEM) was utilised to examine pits, while surface profilometry was conducted to measure pit depth as a function of applied potential over the range considered. Analyses of light pitting (up to 120 μm) revealed that pit depth increased linearly with increase in applied potential. This paper relates total pit volume (measured using white light interferometry) to dissipated charge or total mass loss (using the current response for potentiostatic polarisation in conjunction with Faraday's law). By controlling the (anodic) potential of the surface, the extent of pitting and general corrosion could be controlled. This allowed pits to be evaluated for their ability to continue to propagate after the potentiostatic technique was employed. Linear growth from a depth of 70 μm at pH 3.8, 80 °C was demonstrated. The technique offers promise for the study of inhibition of pitting.
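
    The conversion from dissipated charge to total mass loss via Faraday's law, as used in this study, can be sketched as follows. The current transient is hypothetical, and dissolution as Fe → Fe²⁺ (n = 2 electrons) is an assumption:

    ```python
    import numpy as np

    F = 96485.0          # Faraday constant, C/mol
    M_FE = 55.845        # molar mass of iron, g/mol
    N_E = 2              # electrons transferred per Fe -> Fe2+ dissolution event

    # Hypothetical anodic current response from a 24 h potentiostatic hold (A)
    t = np.linspace(0.0, 24 * 3600.0, 1000)
    current = 2e-3 * np.exp(-t / 5e4) + 1e-3    # decaying transient + steady term

    # Dissipated charge by trapezoidal integration of the current trace
    charge = np.sum((current[1:] + current[:-1]) / 2 * np.diff(t))

    # Faraday's law: mass loss m = M * Q / (n * F)
    mass_loss = charge * M_FE / (N_E * F)       # grams
    ```

    Comparing this Faradaic mass loss with the interferometrically measured pit volume is how the paper separates the pitting contribution from general corrosion.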

  1. Linear shaped charge

    DOEpatents

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  2. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability, repeatability, and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models; a logarithmic scale was used for racing time and a fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, ranging from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale: 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except that correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records.
Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  3. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e., the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  4. Wave-front singularities for two-dimensional anisotropic elastic waves.

    NASA Technical Reports Server (NTRS)

    Payton, R. G.

    1972-01-01

    Wavefront singularities for the displacement functions, associated with the radiation of linear elastic waves from a point source embedded in a finitely strained two-dimensional elastic solid, are examined in detail. It is found that generally the singularities are of order d to the -1/2 power, where d measures distance away from the front. However, in certain exceptional cases singularities of order d to the -n power, where n = 1/4, 2/3, 3/4, may be encountered.

  5. Tsunami and acoustic-gravity waves in water of constant depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendin, Gali; Stiassnie, Michael

    2013-08-15

    A study of wave radiation by a rather general bottom displacement, in a compressible ocean of otherwise constant depth, is carried out within the framework of a three-dimensional linear theory. Simple analytic expressions for the flow field, at large distance from the disturbance, are derived. Realistic numerical examples indicate that the Acoustic-Gravity waves, which significantly precede the Tsunami, are expected to leave a measurable signature on bottom-pressure records that should be considered for early detection of Tsunami.

  6. Seasonal control skylight glazing panel with passive solar energy switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.V.

    1983-10-25

    A substantially transparent one-piece glazing panel is provided for generally horizontal mounting in a skylight. The panel is comprised of a repeated pattern of two alternating and contiguous linear optical elements: the first optical element is an upstanding, generally right-triangular linear prism, and the second is an upward-facing plano-cylindrical lens in which the planar surface is reflectively opaque and lies generally in the same plane as the base of the triangular prism.

  7. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
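
    The decorrelation idea behind GRAMMAR can be illustrated with a whitening sketch: given a covariance V induced by relatedness, transform y and X by V^{-1/2} and apply ordinary least squares. In practice V must itself be estimated (e.g., by REML); here V is a made-up block-family structure assumed known:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400
    X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + covariate
    beta_true = np.array([0.5, 1.2])

    # Relatedness enters through the residual covariance: 100 "families" of
    # 4 correlated individuals (unit variance, within-family correlation 0.6)
    block = 0.6 * np.ones((4, 4)) + 0.4 * np.eye(4)
    V = np.kron(np.eye(100), block)
    L = np.linalg.cholesky(V)
    y = X @ beta_true + L @ rng.normal(size=n)              # correlated residuals

    # GRAMMAR-style decorrelation: whiten y and X with L^{-1}, then use OLS
    Linv = np.linalg.inv(L)
    y_w, X_w = Linv @ y, Linv @ X
    beta_gls, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
    ```

    The whitened fit is just generalized least squares; the fixed-effect estimates are unbiased even under the correlated residual structure.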

  8. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address the problem of high-precision attitude measurement of flying targets over a large space and a large field of view, existing measurement methods are compared and a system is proposed in which two array CCDs assist in identification for a three-linear-CCD, multi-cooperative-target attitude measurement system. This addresses the nonlinear system errors and the large number of calibration parameters of the existing nine-linear-CCD spectroscopic test system, as well as its overly complicated constraints among camera positions. Mathematical models of the binocular vision system and the three-linear-CCD test system are established. Three red LED light points, whose coordinates are given in advance by a coordinate measuring machine, form a cooperative triangular target; three blue LED light points are added along the sides of the triangle as auxiliaries, so that the array CCDs can more easily identify the three red points, and the linear CCD cameras are fitted with red filters to block the blue points while also reducing stray light. The array CCDs measure the spots and identify and compute the spatial coordinates of the red LED points, while the linear CCDs measure the three red spots to solve the linear CCD test system, from which 27 solutions can be drawn. Using the array CCD coordinates to assist the linear CCDs achieves spot identification and solves the difficult problem of multi-target identification with linear CCDs. A special cylindrical lens system for the linear CCDs, designed with telecentric optics to match their imaging characteristics, keeps the energy center of the spot position nearly constant in the direction perpendicular to the optical axis over the depth-of-convergence range, ensuring high-precision image quality. The complete test system improves both the speed and the precision of spatial object attitude measurement.

  9. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
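
    The piecewise-constant-hazard-to-Poisson equivalence described here can be sketched directly: each subject is "exploded" into one record per hazard piece, with log exposure time as an offset, and a Poisson GLM recovers the hazard. This is a hand-rolled IRLS illustration without frailty terms, not the %PCFrailty macro:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    lam = 0.5                                    # true constant hazard
    times = rng.exponential(1 / lam, size=2000)
    obs = np.minimum(times, 3.0)                 # administrative censoring at t = 3
    event = (times <= 3.0).astype(float)

    # "Explode" each subject into person-period records over the hazard pieces
    cuts = np.array([0.0, 1.0, 2.0, 3.0])
    ys, offs, pieces = [], [], []
    for t_i, d_i in zip(obs, event):
        for j in range(len(cuts) - 1):
            if t_i <= cuts[j]:
                break
            exposure = min(t_i, cuts[j + 1]) - cuts[j]
            ys.append(d_i if t_i <= cuts[j + 1] else 0.0)  # event falls in its piece
            offs.append(np.log(exposure))                  # offset = log exposure
            pieces.append(j)

    y = np.array(ys)
    offset = np.array(offs)
    X = np.eye(3)[pieces]                        # one indicator column per piece

    # Poisson IRLS with offset: log E[y] = offset + X beta
    beta = np.zeros(3)
    for _ in range(50):
        mu = np.exp(offset + X @ beta)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    hazard = np.exp(beta)                        # piecewise hazard estimates
    ```

    With simulated constant-hazard data, all three piecewise estimates should sit near 0.5; a log-normal frailty would enter as a random intercept in the same linear predictor.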

  10. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  11. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  12. Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals

    ERIC Educational Resources Information Center

    Kara, Yusuf; Kamata, Akihito

    2017-01-01

    A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…

  13. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
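
    Canonical correlations themselves, the quantity underlying the test statistic discussed here, can be computed compactly via a QR factorization followed by an SVD. The data below are synthetic, and the snippet does not implement the paper's directional test:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 500
    z = rng.normal(size=n)                       # shared latent signal
    X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
    Y = np.column_stack([2 * z + 0.1 * rng.normal(size=n), rng.normal(size=n)])

    # Canonical correlations: orthonormalize each centered block, then take the
    # singular values of the cross-product of the orthonormal bases
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)   # sorted descending
    ```

    The first canonical correlation should be near 1 (both blocks share z), the second near 0; the MVMR equivalence in the paper builds its contrast test on top of exactly these quantities.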

  15. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.

  16. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predictions obtained from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
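
    A GRNN prediction is essentially a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator). A minimal one-dimensional sketch on synthetic data, not the LOD series used in the paper, is:

    ```python
    import numpy as np

    def grnn_predict(x_train, y_train, x_query, sigma=0.1):
        """GRNN prediction: Gaussian-kernel weighted average of training targets."""
        d2 = (x_query[:, None] - x_train[None, :]) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))          # pattern-layer activations
        return (w @ y_train) / w.sum(axis=1)        # summation/division layers

    rng = np.random.default_rng(6)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)  # noisy non-linear signal

    x_new = np.array([0.25, 0.75])
    y_hat = grnn_predict(x, y, x_new, sigma=0.05)
    ```

    The single smoothing parameter sigma is the only quantity to tune, which is one reason GRNNs train quickly compared with iteratively fitted BP networks.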

  17. Linear parameter varying representations for nonlinear control design

    NASA Astrophysics Data System (ADS)

    Carter, Lance Huntington

    Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross-axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed are investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV-based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.
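    The frozen-parameter gain-scheduling idea can be sketched minimally: design a controller at each of several fixed values of the scheduling parameter, then select or interpolate the gain online. This toy uses discrete LQR rather than the paper's mu synthesis, on an invented scalar-parameter plant:

    ```python
    import numpy as np

    def dlqr_gain(A, B, Q, R, iters=200):
        """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    def plant(rho):
        # hypothetical LPV plant: dynamics depend on the scheduling variable rho
        A = np.array([[1.0, 0.1], [-0.1 * rho, 0.9]])
        B = np.array([[0.0], [0.1]])
        return A, B

    Q, R = np.eye(2), np.array([[1.0]])
    grid = [0.0, 1.0, 2.0]                       # frozen scheduling values
    gains = [dlqr_gain(*plant(r), Q, R) for r in grid]

    K = gains[1]                                 # gain scheduled for rho = 1
    A, B = plant(1.0)
    closed = A - B @ K
    print(max(abs(np.linalg.eigvals(closed))))   # spectral radius < 1: stable at frozen rho
    ```

    Note that frozen-parameter stability does not by itself guarantee stability under fast parameter variation, which is one motivation for the LPV machinery in the thesis.
    
    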

  18. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory of linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important to the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behavior for polymers in the rubbery and glassy regions.

  19. Comparison of craniofacial linear measurements of 20-40 year-old males and females using digital lateral cephalometric radiography in Indonesia

    NASA Astrophysics Data System (ADS)

    Aurizanti, D.; Suryonegoro, H.; Priaminiarti, M.

    2017-08-01

    Craniofacial characteristics are among the parameters used for sex determination after puberty. The aim of this study was to obtain linear measurements, using lateral cephalometric radiography, of adults aged 20-40 years in Indonesia, by sex. Ten linear craniofacial parameters were measured on 100 digital lateral cephalometric radiographs. Inter- and intra-observer reliability was tested using the technical error of measurement. The independent t-test and the Mann-Whitney U test were used to evaluate the significance of the findings. There were significant differences between males and females in all 10 linear measurements: all 10 cephalometric parameters were larger in males than in females, so lateral cephalometric radiography can be used to help determine sex.

  20. Integration of system identification and finite element modelling of nonlinear vibrating structures

    NASA Astrophysics Data System (ADS)

    Cooper, Samson B.; DiMaio, Dario; Ewins, David J.

    2018-03-01

    The Finite Element Method (FEM), experimental modal analysis (EMA), and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often applied to small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove to be unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series, and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed approach is demonstrated in a case study of a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.
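    Phase two, identifying a polynomial-type nonlinearity from measured data, can be illustrated with a least-squares fit of a linear-plus-cubic stiffness; the data below are synthetic stand-ins, not the beam assembly's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)            # displacement samples
    k_lin, k_cub = 100.0, 5.0              # hypothetical linear and cubic stiffnesses
    f = k_lin * x + k_cub * x**3 + 0.01 * rng.standard_normal(200)  # noisy restoring force

    # least-squares fit of a polynomial-type (cubic) nonlinearity
    basis = np.column_stack([x, x**3])
    coef, *_ = np.linalg.lstsq(basis, f, rcond=None)
    print(coef)  # ≈ [100, 5]
    ```

    The recovered coefficients would then be used to augment the underlying linear FE model with the identified nonlinear term.
    
    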

  1. Sources of variability in satellite-derived estimates of phytoplankton production in the eastern tropical Pacific

    NASA Technical Reports Server (NTRS)

    Banse, Karl; Yong, Marina

    1990-01-01

    As a proxy for satellite CZCS observations and concurrent measurements of primary production rates, data from 138 stations occupied seasonally during 1967-1968 in the offshore eastern tropical Pacific were analyzed in terms of six temporal groups and four current regimes. Multiple linear regressions on column production Pt show that simulated satellite pigment is generally weakly correlated, but sometimes not correlated, with Pt, and that incident irradiance, sea surface temperature, nitrate, transparency, and the depths of the mixed layer or nitracline assume little or no importance. After a proxy for the light-saturated chlorophyll-specific photosynthetic rate P(max) is added, the coefficient of determination ranges from 0.55 to 0.91 (median 0.85) for the 10 cases. In stepwise multiple linear regressions the P(max) proxy is the best predictor of Pt.
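    A minimal sketch of one forward step of stepwise regression, choosing the single best predictor by R-squared; the variables are synthetic stand-ins, not the station data:

    ```python
    import numpy as np

    def forward_step(X, y, names):
        """One forward step of stepwise regression:
        return (name, R^2) of the single best predictor of y."""
        best = None
        for j, name in enumerate(names):
            A = np.column_stack([np.ones(len(y)), X[:, j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
            if best is None or r2 > best[1]:
                best = (name, r2)
        return best

    rng = np.random.default_rng(1)
    n = 100
    pmax = rng.uniform(1, 5, n)          # stand-in for the light-saturated rate proxy
    pigment = rng.uniform(0.1, 1.0, n)   # stand-in for simulated satellite pigment
    sst = rng.uniform(20, 30, n)         # stand-in for sea surface temperature
    y = 3.0 * pmax + 0.2 * pigment + 0.05 * rng.standard_normal(n)

    X = np.column_stack([pmax, pigment, sst])
    print(forward_step(X, y, ["Pmax", "pigment", "SST"]))
    ```

    Subsequent steps would regress the residuals on the remaining candidates, mirroring the stepwise procedure described in the abstract.
    
    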

  2. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  3. Substance Use Disorder Counselors’ Job Performance and Turnover after 1 Year: Linear or Curvilinear Relationship?

    PubMed Central

    Laschober, Tanja C.; de Tormes Eby, Lillian Turner

    2013-01-01

    The main goals of the current study were to investigate whether there are linear or curvilinear relationships between substance use disorder counselors’ job performance and actual turnover after 1 year utilizing four indicators of job performance and three turnover statuses (voluntary, involuntary, and no turnover as the reference group). Using longitudinal data from 440 matched counselor-clinical supervisor dyads, results indicate that overall, counselors with lower job performance are more likely to turn over voluntarily and involuntarily than not to turn over. Further, one of the job performance measures shows a significant curvilinear effect. We conclude that the negative consequences often assumed to be “caused” by counselor turnover may be overstated because those who leave both voluntarily and involuntarily demonstrate generally lower performance than those who remain employed at their treatment program. PMID:22527711

  4. Substance use disorder counselors' job performance and turnover after 1 year: linear or curvilinear relationship?

    PubMed

    Laschober, Tanja C; de Tormes Eby, Lillian Turner

    2013-07-01

    The main goals of the current study were to investigate whether there are linear or curvilinear relationships between substance use disorder counselors' job performance and actual turnover after 1 year utilizing four indicators of job performance and three turnover statuses (voluntary, involuntary, and no turnover as the reference group). Using longitudinal data from 440 matched counselor-clinical supervisor dyads, results indicate that overall, counselors with lower job performance are more likely to turn over voluntarily and involuntarily than not to turn over. Further, one of the job performance measures shows a significant curvilinear effect. We conclude that the negative consequences often assumed to be "caused" by counselor turnover may be overstated because those who leave both voluntarily and involuntarily demonstrate generally lower performance than those who remain employed at their treatment program.
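    Testing for a curvilinear effect typically amounts to adding a squared performance term to the model; a minimal sketch on invented data (the study itself used four performance indicators and three turnover statuses):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    perf = rng.uniform(1, 5, 300)                       # hypothetical performance ratings
    # hypothetical curvilinear relation: risk falls with performance, flattening at the top
    risk = 2.0 - 0.8 * perf + 0.1 * perf**2 + 0.05 * rng.standard_normal(300)

    # quadratic (curvilinear) model: a nonzero squared-term coefficient b2
    # is the evidence of curvilinearity
    b2, b1, b0 = np.polyfit(perf, risk, 2)
    print(round(b2, 2))
    ```

    In the study's setting the outcome is a turnover category rather than a continuous risk score, so the published analysis would use a categorical model; the quadratic-term logic is the same.
    
    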

  5. Steric hindrances create a discrete linear Dy4 complex exhibiting SMM behaviour.

    PubMed

    Lin, Shuang-Yan; Zhao, Lang; Ke, Hongshan; Guo, Yun-Nan; Tang, Jinkui; Guo, Yang; Dou, Jianmin

    2012-03-21

    Two linear tetranuclear lanthanide complexes of general formula [Ln4(L)2(C6H5COO)12(MeOH)4], where HL = 2,6-bis((furan-2-ylmethylimino)methyl)-4-methylphenol and Ln(III) = Dy(III) (1) or Gd(III) (2), have been synthesized and characterized. The crystal-structure analysis demonstrates that the two Schiff-base ligands inhibit the growth of benzoate-bridged 1D chains, their steric hindrance leading to the isolation of discrete tetranuclear complexes. Each Ln(III) ion is coordinated by eight donor atoms in a distorted bicapped trigonal-prismatic arrangement. Alternating-current (ac) susceptibility measurements of complex 1 reveal a frequency- and temperature-dependent out-of-phase signal under zero dc field, typical of single-molecule magnet (SMM) behaviour with an anisotropy barrier Δeff = 17.2 K.

  6. Mars Crustal Remanent Magnetism: An Extinct Dynamo Leaves a Record of Field Reversals in the Heavily Cratered Highlands

    NASA Technical Reports Server (NTRS)

    Connerney, John E.; Acuna, Mario H.; Ness, Norman F.; Wasilewski, Peter J.

    1999-01-01

    The Mars Global Surveyor spacecraft, in a highly elliptical polar orbit about Mars, obtained vector magnetic field measurements just above the surface of Mars (altitudes > 100 kilometers). Crustal magnetization, largely confined to the most ancient, heavily cratered Mars highlands, is frequently organized in east-west-trending linear features, the largest of which extends over 2000 km. A representative set of survey passes is modeled using uniformly magnetized thin plates and a generalized inverse methodology. Crustal remanent magnetization exceeds that deduced for the largest terrestrial magnetic anomalies by more than an order of magnitude. Groups of quasi-parallel linear features of alternating magnetic polarity are found. They are reminiscent of similar magnetic features associated with sea-floor spreading and crustal genesis on Earth, but with a much larger spatial scale.

  7. Discrete integration of continuous Kalman filtering equations for time invariant second-order structural systems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Belvin, W. Keith

    1990-01-01

    A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.

  8. Second-order discrete Kalman filtering equations for control-structure interaction simulations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Belvin, W. Keith; Alvin, Kenneth F.

    1991-01-01

    A general form for the first-order representation of the continuous, second-order linear structural dynamics equations is introduced in order to derive a corresponding form of first-order Kalman filtering equations (KFE). Time integration of the resulting first-order KFE is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete KFE involving only symmetric, N x N solution matrices.
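    The starting point of both of the preceding records, recasting a second-order structural model in first-order form and filtering it, can be sketched with a generic discrete Kalman filter; this sketch does not exploit the symmetric second-order structure that is the papers' contribution, and all numbers are invented:

    ```python
    import numpy as np

    # second-order structural model  M*qdd + C*qd + K*q = w, single DOF for brevity
    M, C, K = 1.0, 0.4, 10.0
    dt = 0.01
    # first-order (state-space) representation, x = [q, qd], forward-Euler discretized
    A = np.eye(2) + dt * np.array([[0.0, 1.0], [-K / M, -C / M]])
    H = np.array([[1.0, 0.0]])                 # displacement measurement only
    Qn, Rn = 1e-4 * np.eye(2), np.array([[1e-2]])

    def kf_step(x, P, z):
        """One predict/update cycle of the discrete Kalman filter."""
        x, P = A @ x, A @ P @ A.T + Qn         # predict
        S = H @ P @ H.T + Rn
        Kg = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + Kg @ (z - H @ x)               # update
        P = (np.eye(2) - Kg @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)
    for z in [0.1, 0.12, 0.11]:                # made-up displacement readings
        x, P = kf_step(x, P, np.array([z]))
    print(x)
    ```

    In the papers' formulation the filter is manipulated so that only symmetric N x N matrices (in the structural coordinates) need to be solved, rather than the 2N x 2N covariance used here.
    
    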

  9. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
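    For readers without a TI-92+, the Gram-Schmidt process and a QR-based least-squares solve translate directly into a few lines of Python; a minimal sketch:

    ```python
    import numpy as np

    def gram_schmidt_qr(A):
        """Classical Gram-Schmidt QR factorization (columns assumed independent)."""
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for j in range(n):
            v = A[:, j].astype(float)
            for i in range(j):
                R[i, j] = Q[:, i] @ A[:, j]
                v -= R[i, j] * Q[:, i]
            R[j, j] = np.linalg.norm(v)
            Q[:, j] = v / R[j, j]
        return Q, R

    def lstsq_qr(A, b):
        """Least-squares solution via QR: solve R x = Q^T b."""
        Q, R = gram_schmidt_qr(A)
        return np.linalg.solve(R, Q.T @ b)

    # best-fit line through three points (t, y) = (1,1), (2,2), (3,2)
    A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0, 2.0])
    sol = lstsq_qr(A, b)
    print(sol)  # [intercept, slope]
    ```

    Classical Gram-Schmidt is numerically fragile for ill-conditioned columns; the modified variant, or a Householder QR, is preferred in practice.
    
    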

  10. Linearization instability for generic gravity in AdS spacetime

    NASA Astrophysics Data System (ADS)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different; for example, Minkowski space, which has a non-compact Cauchy surface, is linearization stable. Here we study linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, some modified theories show linearization instability about their anti-de Sitter backgrounds even in the non-compact Cauchy surface case. Recent D-dimensional critical and three-dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  11. Aortic dimensions in Turner syndrome.

    PubMed

    Quezada, Emilio; Lapidus, Jodi; Shaughnessy, Robin; Chen, Zunqiu; Silberbach, Michael

    2015-11-01

    In Turner syndrome, linear growth is less than in the general population. Consequently, condition-specific comparators have been employed to assess stature in Turner syndrome. Similar reference curves for cardiac structures in Turner syndrome are currently unavailable. Accurate assessment of the aorta is particularly critical in Turner syndrome because aortic dissection and rupture occur more frequently than in the general population. Furthermore, comparing the shorter Turner syndrome population to references calculated from the taller general population can lead to over-estimation of aortic size, causing stigmatization, medicalization, and potentially over-treatment. We used echocardiography to measure aortic diameters at eight levels of the thoracic aorta in 481 healthy girls and women with Turner syndrome who ranged in age from two to seventy years. Univariate and multivariate linear regression analyses were performed to assess the influence of karyotype, age, body mass index, bicuspid aortic valve, blood pressure, history of renal disease, thyroid disease, and growth hormone therapy. Because only bicuspid aortic valve was found to independently affect aortic size, subjects with bicuspid aortic valve were excluded from the analysis. Regression equations for aortic diameters were calculated, and Z-scores corresponding to 1, 2, and 3 standard deviations from the mean were plotted against body surface area. The information presented here will allow clinicians and other caregivers to calculate aortic Z-scores using a Turner-based reference population. © 2015 Wiley Periodicals, Inc.
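    Computing a Z-score from such a regression-based reference reduces to comparing the measurement with the predicted mean in standard-deviation units; the coefficients below are placeholders for illustration only, not the published Turner-reference equations:

    ```python
    def aortic_z(measured_mm, bsa, slope, intercept, sd):
        """Z-score of a measured diameter against a regression reference.
        slope, intercept, and sd are placeholder values, not published equations."""
        predicted = intercept + slope * bsa   # reference mean at this body surface area
        return (measured_mm - predicted) / sd

    # illustrative numbers only
    z = aortic_z(measured_mm=28.0, bsa=1.6, slope=10.0, intercept=8.0, sd=2.0)
    print(z)  # 2.0: two standard deviations above the reference mean
    ```

    A Z-score near +2 or +3 would flag a diameter at the corresponding upper percentile of the condition-specific reference population.
    
    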

  12. Reducing Stress Among Mothers in Drug Treatment: A Description of a Mindfulness Based Parenting Intervention.

    PubMed

    Short, Vanessa L; Gannon, Meghan; Weingarten, Wendy; Kaltenbach, Karol; LaNoue, Marianna; Abatemarco, Diane J

    2017-06-01

    Background: Parenting women with substance use disorder could potentially benefit from interventions designed to decrease stress and improve overall psychosocial health. In this study we assessed whether a mindfulness-based parenting (MBP) intervention could decrease general and parenting stress in a population of women who are in treatment for substance use disorder and who have infants or young children. Methods: MBP participants (N = 59) attended a two-hour session once a week for 12 weeks. Within-group differences on stress outcome measures administered before and after the intervention period were investigated using mixed-effects linear regression models accounting for correlations arising from the repeated measures. Scales assessed for pre-post change included the Perceived Stress Scale-10 (PSS) and the Parenting Stress Index-Short Form (PSI). Results: General stress, as measured by the PSS, decreased significantly from baseline to post-intervention. Women with the highest baseline general stress level experienced the greatest change in total stress score. A significant change also occurred on the Parental Distress PSI subscale. Conclusions: Findings from this interventional study suggest that adding MBP to treatment programs for parenting women with substance use disorder is an effective strategy for reducing stress in this at-risk population.

  13. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized-inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; solutions were returned from 5.5 to seven times faster than by the minimum-norm solution (the pseudo-inverse), and at about the same rate as by the cascaded generalized-inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
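    For contrast with the patented method, the minimum-norm (pseudo-inverse) allocation it is compared against can be sketched in a few lines; the effectiveness matrix and effector limits here are invented:

    ```python
    import numpy as np

    def allocate(B, d, limits):
        """Minimum-norm (pseudo-inverse) control allocation, clipped to effector limits.
        Clipping can sacrifice optimality; direct-allocation methods like the one in
        the patent instead search the attainable set."""
        u = np.linalg.pinv(B) @ d
        return np.clip(u, -limits, limits)

    B = np.array([[1.0, 0.5, 0.2],   # effectiveness of 3 effectors on 2 objectives
                  [0.0, 1.0, 0.8]])
    d = np.array([0.4, 0.3])          # desired objective (e.g. moment) vector
    u = allocate(B, d, limits=np.array([1.0, 1.0, 1.0]))
    print(np.round(B @ u, 6))         # achieved objectives
    ```

    When the pseudo-inverse solution lies inside the limits, the desired objectives are met exactly; the shortcoming noted in the patent is that this solution does not exploit the effectors' full collective capability near the limits.
    
    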

  14. Differential Dynamic Engagement within 24 SH3 Domain: Peptide Complexes Revealed by Co-Linear Chemical Shift Perturbation Analysis

    PubMed Central

    Stollar, Elliott J.; Lin, Hong; Davidson, Alan R.; Forman-Kay, Julie D.

    2012-01-01

    There is increasing evidence for the functional importance of multiple dynamically populated states within single proteins. However, peptide binding by protein-protein interaction domains, such as the SH3 domain, has generally been considered to involve the full engagement of peptide to the binding surface with minimal dynamics, and simple methods to determine dynamics at the binding surface for multiple related complexes have not been described. We have used NMR spectroscopy combined with isothermal titration calorimetry to comprehensively examine the extent of engagement to the yeast Abp1p SH3 domain for 24 different peptides. Over one quarter of the domain residues display co-linear chemical shift perturbation (CCSP) behavior, in which the position of a given chemical shift in a complex is co-linear with the same chemical shift in the other complexes, providing evidence that each complex exists as a unique, rapidly inter-converting dynamic ensemble. The extent to which the specificity-determining sub-surface of AbpSH3 is engaged, as judged by CCSP analysis, correlates with structural and thermodynamic measurements as well as with functional data, revealing the basis for significant structural and functional diversity amongst the related complexes. Thus, CCSP analysis can distinguish peptide complexes that may appear identical in terms of general structure and percent peptide occupancy but have significant local binding differences across the interface, affecting their ability to transmit conformational change across the domain and resulting in functional differences. PMID:23251481

  15. The Development of a Dual-Warhead Impact System for Dynamic Linearity Measurement of a High-g Micro-Electro-Mechanical-Systems (MEMS) Accelerometer

    PubMed Central

    Shi, Yunbo; Yang, Zhicai; Ma, Zongmin; Cao, Huiliang; Kou, Zhiwei; Zhi, Dan; Chen, Yanxiang; Feng, Hengzhen; Liu, Jun

    2016-01-01

    Despite its extreme significance, dynamic linearity measurement for high-g accelerometers has not been discussed experimentally in previous research. In this study, we developed a novel method using a dual-warhead Hopkinson bar to measure the dynamic linearity of a high-g acceleration sensor with a laser interference impact experiment. First, we theoretically determined that dynamic linearity is a performance indicator that can be used to assess the quality merits of high-g accelerometers and is the basis of the frequency response. We also found that the dynamic linearity of the dual-warhead Hopkinson bar without an accelerometer is 2.5% experimentally. Further, we verify that dynamic linearity of the accelerometer is 3.88% after calibrating the Hopkinson bar with the accelerometer. The results confirm the reliability and feasibility of measuring dynamic linearity for high-g accelerometers using this method. PMID:27338383
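    A common way to quantify (dynamic) linearity is the maximum deviation of the measured input-output pairs from their least-squares line, expressed as a percentage of full scale; the exact metric used in the paper may differ, and the data below are made up:

    ```python
    def dynamic_linearity(inputs, outputs):
        """Max deviation from the least-squares line, as a percentage of
        full-scale output (a generic nonlinearity definition)."""
        n = len(inputs)
        mx = sum(inputs) / n
        my = sum(outputs) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(inputs, outputs)) / \
                sum((x - mx) ** 2 for x in inputs)
        intercept = my - slope * mx
        dev = max(abs(y - (intercept + slope * x)) for x, y in zip(inputs, outputs))
        return 100.0 * dev / (max(outputs) - min(outputs))

    # made-up peak-acceleration (input) vs sensor-output pairs
    dl = dynamic_linearity([1, 2, 3, 4], [1.0, 2.1, 2.9, 4.0])
    print(round(dl, 2))  # percent nonlinearity
    ```

    With such a metric, the 2.5% figure for the bare bar and 3.88% for the bar-plus-accelerometer would each come from one impact-calibration data set.
    
    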

  16. Theory of attosecond delays in molecular photoionization.

    PubMed

    Baykusheva, Denitsa; Wörner, Hans Jakob

    2017-03-28

    We present a theoretical formalism for the calculation of attosecond delays in molecular photoionization. It is shown how delays relevant to one-photon ionization, also known as Eisenbud-Wigner-Smith delays, can be obtained from the complex dipole matrix elements provided by molecular quantum scattering theory. These results are used to derive formulae for the delays measured by two-photon attosecond interferometry based on an attosecond pulse train and a dressing femtosecond infrared pulse. These effective delays are first expressed in the molecular frame, where maximal information about the molecular photoionization dynamics is available. The effects of averaging over the emission direction of the electron and the molecular orientation are introduced analytically. We illustrate this general formalism for the case of two polyatomic molecules. N2O serves as an example of a polar linear molecule characterized by complex photoionization dynamics resulting from the presence of molecular shape resonances. H2O illustrates the case of a non-linear molecule with comparably simple photoionization dynamics resulting from a flat continuum. Our theory establishes the foundation for interpreting measurements of the photoionization dynamics of all molecules by attosecond metrology.

  17. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures which are based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved in a very fast manner by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and we present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  18. The Detection of Radiated Modes from Ducted Fan Engines

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Nark, Douglas M.; Thomas, Russell H.

    2001-01-01

    The bypass duct of an aircraft engine is a low-pass filter allowing some spinning modes to radiate outside the duct. Knowledge of the radiated modes can help in noise reduction, as well as in the diagnosis of noise-generation mechanisms inside the duct. We propose a nonintrusive technique using a circular microphone array outside the engine, measuring the complex noise spectrum on an arc of a circle. The array is placed at various axial distances from the inlet or the exhaust of the engine. Using a model of noise radiation from the duct, an overdetermined system of linear equations is constructed for the complex amplitudes of the radial modes for a fixed circumferential mode. This system of linear equations is generally singular, indicating that the problem is ill-posed. Tikhonov regularization is employed to solve this system of equations for the unknown amplitudes of the radiated modes. An application of our mode-detection technique using measured acoustic data from a circular microphone array is presented. We show that this technique can reliably detect radiated modes, with the possible exception of modes very close to cut-off.
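    Tikhonov regularization of such a singular (ill-posed) system replaces the normal equations with a damped version; a minimal sketch on an invented nearly singular system, not the duct-acoustics model:

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Tikhonov-regularized least squares: minimize ||A x - b||^2 + lam ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # nearly singular overdetermined system (second column almost equals the first),
    # standing in for the singular mode-amplitude equations
    A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
    b = np.array([2.0, 2.0, 2.0])
    x = tikhonov_solve(A, b, lam=1e-3)
    print(np.round(x, 3))   # damping spreads the solution across the near-dependent columns
    ```

    The regularization parameter lam trades fidelity to the data against the size of the solution; choosing it (e.g. by the L-curve or cross-validation) is the practical crux of the method.
    
    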

  19. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited).

    PubMed

    Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E

    2014-11-01

    At the anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(mec²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions, which allows us to take into account distortions caused by equilibrium current, ECRH, and RF current-drive effects. The classical problem of the degree of polarization of incoherent Thomson-scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson-scattered light as a method of Te measurement relevant to ITER operational scenarios.

  20. Selective Catalytic Combustion Sensors for Reactive Organic Analysis

    NASA Technical Reports Server (NTRS)

    Innes, W. B.

    1971-01-01

    Sensors involving a vanadia-alumina catalyst bed-thermocouple assembly satisfy requirements for simple, reproducible, and rapid continuous analysis of reactive organics. Responses generally increase with temperature up to 400 C and increase to a maximum with flow rate per catalyst volume. Selectivity decreases with temperature. Response time decreases with flow rate and increases with catalyst volume. At the chosen optimum conditions, the calculated response, which is additive and linear, agrees better with photochemical reactivity than do other methods for various automotive sources, and the response to vehicle exhaust is insensitive to flow rate. Applications to the measurement of total reactive organics in vehicle exhaust, as well as to gas chromatography detection, illustrate its utility. The approach appears generally applicable to high-thermal-effect reactions involving first-order kinetics.

  1. Training on the DSM-5 Cultural Formulation Interview Improves Cultural Competence in General Psychiatry Residents: A Multi-site Study.

    PubMed

    Mills, Stacia; Wolitzky-Taylor, Kate; Xiao, Anna Q; Bourque, Marie Claire; Rojas, Sandra M Peynado; Bhattacharya, Debanjana; Simpson, Annabelle K; Maye, Aleea; Lo, Pachida; Clark, Aaron; Lim, Russell; Lu, Francis G

    2016-10-01

    The authors assessed whether a 1-h didactic session on the DSM-5 Cultural Formulation Interview (CFI) improves cultural competence of general psychiatry residents. Psychiatry residents at six residency programs completed demographics and pre-intervention questionnaires, were exposed to a 1-h session on the CFI, and completed a post-intervention questionnaire. Repeated measures ANCOVA compared pre- to post-intervention change. Linear regression assessed whether previous cultural experience predicted post-intervention scores. Mean scores on the questionnaire significantly changed from pre- to post-intervention (p < 0.001). Previous cultural experience did not predict post-intervention scores. Psychiatry residents' cultural competence scores improved with a 1-h session on the CFI but with notable limitations.

  2. Analysis of lightning field changes produced by Florida thunderstorms

    NASA Technical Reports Server (NTRS)

    Koshak, William John

    1991-01-01

    A new method is introduced for inferring the charges deposited in a lightning flash. Lightning-caused field changes (delta E's) are described by a more general volume charge distribution than in previous methods; the distribution is defined on a large Cartesian grid system centered above the measuring networks. It is shown that a linear system of equations can be used to relate delta E's at the ground to the values of charge on this grid. It is possible to apply more general physical constraints to the charge solutions, and it is possible to assess the information content of the delta E data. Computer-simulated delta E inversions show that the location and symmetry of the charge retrievals are usually consistent with the known test sources.
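    The charge-retrieval idea above is essentially a linear least-squares inversion: a matrix maps gridded charges to field changes at ground sensors, and the charges are recovered by solving the normal equations. The toy sketch below illustrates this; the geometry, the simplified dipole-like field kernel, and the charge values are all invented for illustration and are not taken from the paper.

```python
def solve(A, b):
    """Solve the square system A x = b by Gauss-Jordan elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def least_squares(A, d):
    """x minimizing ||A x - d||: solve the normal equations (A^T A) x = A^T d."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atd = [sum(A[k][i] * d[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atd)

# Invented geometry: two charge cells aloft, three ground sensors.
sensors = [0.0, 5.0, 10.0]           # sensor x-positions (km)
grid = [(3.0, 6.0), (8.0, 4.0)]      # (x, height) of candidate charge cells
kernel = [[h / ((x - sx) ** 2 + h ** 2) ** 1.5 for (x, h) in grid]
          for sx in sensors]         # simplified point-charge field kernel
true_q = [12.0, -7.0]                # deposited charges (illustrative)
dE = [sum(k * q for k, q in zip(row, true_q)) for row in kernel]

recovered = least_squares(kernel, dE)
print(recovered)                     # close to [12.0, -7.0]
```

    With noise-free synthetic data the overdetermined system is consistent, so the least-squares solution reproduces the test charges; the paper's simulations add the physical constraints and noise handling this sketch omits.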

  3. The action uncertainty principle for continuous measurements

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1996-02-01

    The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms, and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t)A(p,q,t) in the Hamiltonian, where the function δF (a generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in the phase space should be of the order of ℏ. The width of the band depends on the measurement resolution, while its length is determined by the deviation of the system, due to the measurement, from classical behavior.

  4. Study on sampling of continuous linear system based on generalized Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Huiguang

    2003-09-01

    In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be analyzed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, satisfy neither Riemann nor Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. When a continuous linear system is sampled, the paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling, and they also have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an example application of this paper's results.

  5. Comparison of Saffron and Fluvoxamine in the Treatment of Mild to Moderate Obsessive-Compulsive Disorder: A Double Blind Randomized Clinical Trial

    PubMed Central

    Esalatmanesh, Sophia; Biuseh, Mojtaba; Noorbala, Ahmad Ali; Mostafavi, Seyed-Ali; Rezaei, Farzin; Mesgarpour, Bita; Mohammadinejad, Payam; Akhondzadeh, Shahin

    2017-01-01

    Objective: There are different pathophysiological mechanisms for obsessive-compulsive disorder (OCD) as suggested by the serotonergic, dopaminergic, and glutamatergic hypotheses. The present study aimed at comparing the efficacy and safety of saffron (stigma of Crocus sativus) and fluvoxamine in the treatment of mild to moderate obsessive-compulsive disorder. Method: In this study, 50 males and females, aged 18 to 60 years, with mild to moderate OCD, participated. The patients were randomly assigned to receive either saffron (30 mg/day, 15 mg twice a day) or fluvoxamine (100 mg/day) for 10 weeks. Using the Yale-Brown Obsessive Compulsive Scale (Y-BOCS) and the Adverse Event Checklist, we assessed the patients at baseline, and at the second, fourth, sixth, eighth, and tenth week. Finally, the data were analyzed using general linear repeated measures. Results: In this study, 46 patients completed the trial. General linear repeated measures demonstrated no significant effect for time-treatment interaction on the Y-BOCS total scores [F (2.42, 106.87) = 0.70, P = 0.52], obsession Y-BOCS subscale scores [F (2.47, 108.87) = 0.77, P = 0.49], and compulsion Y-BOCS subscale scores [F (2.18, 96.06) = 0.25, P = 0.79]. Frequency of adverse events was not significantly different between the 2 groups. Conclusion: Our findings suggest that saffron is as effective as fluvoxamine in the treatment of patients with mild to moderate OCD. PMID:29062366

  6. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    PubMed

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Treesearch

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  8. Thermal Density Functional Theory: Time-Dependent Linear Response and Approximate Functionals from the Fluctuation-Dissipation Theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron

    We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.

  9. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  10. Thermal Density Functional Theory: Time-Dependent Linear Response and Approximate Functionals from the Fluctuation-Dissipation Theorem

    DOE PAGES

    Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron

    2016-06-08

    We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.

  11. Elastic metamaterial beam with remotely tunable stiffness

    NASA Astrophysics Data System (ADS)

    Qian, Wei; Yu, Zhengyue; Wang, Xiaole; Lai, Yun; Yellen, Benjamin B.

    2016-02-01

    We demonstrate a dynamically tunable elastic metamaterial, which employs remote magnetic force to adjust its vibration absorption properties. The 1D metamaterial is constructed from a flat aluminum beam milled with a linear array of cylindrical holes. The beam is backed by a thin elastic membrane, on which thin disk-shaped permanent magnets are mounted. When excited by a shaker, the beam motion is tracked by a Laser Doppler Vibrometer, which conducts point by point scanning of the vibrating element. Elastic waves are unable to propagate through the beam when the driving frequency excites the first elastic bending mode in the unit cell. At these frequencies, the effective mass density of the unit cell becomes negative, which induces an exponentially decaying evanescent wave. Due to the non-linear elastic properties of the membrane, the effective stiffness of the unit cell can be tuned with an external magnetic force from nearby solenoids. Measurements of the linear and cubic static stiffness terms of the membrane are in excellent agreement with experimental measurements of the bandgap shift as a function of the applied force. In this implementation, bandgap shifts by as much as 40% can be achieved with ˜30 mN of applied magnetic force. This structure has potential for extension in 2D and 3D, providing a general approach for building dynamically tunable elastic metamaterials for applications in lensing and guiding elastic waves.

  12. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    PubMed

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (ΦCO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of ΦCO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of ΦCO2LL. A model method accounting for this error was developed, and was used to estimate ΦCO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method underestimated ΦCO2LL by ca. 10-15%. Differences in the estimated ΦCO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.

  13. Information-geometric measures estimate neural interactions during oscillatory brain states

    PubMed Central

    Nie, Yimin; Fellous, Jean-Marc; Tatsuno, Masami

    2014-01-01

    The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions, has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain. PMID:24605089
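    For a pair of binary neurons, the pairwise IG measure reduces to the log odds ratio of the joint firing probabilities, which vanishes exactly when the two neurons fire independently. A minimal sketch (all probabilities below are invented for illustration):

```python
from math import log

def pairwise_ig(p00, p01, p10, p11):
    """Pairwise IG measure theta_12 = log(p11 * p00 / (p10 * p01))
    from the joint distribution of two binary neurons x1, x2;
    zero iff the neurons are independent."""
    return log(p11 * p00 / (p10 * p01))

# Independent neurons with firing rates r1, r2: theta_12 is zero.
r1, r2 = 0.2, 0.3
theta_ind = pairwise_ig((1 - r1) * (1 - r2), (1 - r1) * r2,
                        r1 * (1 - r2), r1 * r2)
print(abs(theta_ind) < 1e-9)   # True

# Positively correlated firing (joint spikes over-represented): theta > 0.
theta_corr = pairwise_ig(0.60, 0.15, 0.10, 0.15)
print(theta_corr > 0)          # True (log 6, about 1.79)
```

    The single-neuron IG measure is the analogous log odds of firing, log(p1/p0); the paper's point is how these quantities relate to inputs and coupling strengths under oscillatory, non-equilibrium activity.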

  14. Information-geometric measures estimate neural interactions during oscillatory brain states.

    PubMed

    Nie, Yimin; Fellous, Jean-Marc; Tatsuno, Masami

    2014-01-01

    The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions, has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under the equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory condition that can often be observed in the real brain.

  15. Linear chirped slope profile for spatial calibration in slope measuring deflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siewert, F., E-mail: frank.siewert@helmholtz-berlin.de; Zeschke, T.; Arnold, T.

    2016-05-15

    Slope measuring deflectometry is commonly used by the X-ray optics community to measure the long-spatial-wavelength surface figure error of optical components dedicated to guide and focus X-rays under grazing incidence condition at synchrotron and free electron laser beamlines. The best performing instruments of this kind are capable of absolute accuracy on the level of 30-50 nrad. However, the exact bandwidth of the measurements, determined at the higher spatial frequencies by the instrument’s spatial resolution, or more generally by the instrument’s modulation transfer function (MTF) is hard to determine. An MTF calibration method based on application of a test surface with a one-dimensional (1D) chirped height profile of constant amplitude was suggested in the past. In this work, we propose a new approach to designing the test surfaces with a 2D-chirped topography, specially optimized for MTF characterization of slope measuring instruments. The design of the developed MTF test samples based on the proposed linear chirped slope profiles (LCSPs) is free of the major drawback of the 1D chirped height profiles, where in the slope domain, the amplitude strongly increases with the local spatial frequency of the profile. We provide the details of fabrication of the LCSP samples. The results of first application of the developed test samples to measure the spatial resolution of the BESSY-NOM at different experimental arrangements are also presented and discussed.

  16. Artificial Intelligence Methodologies in Flight Related Differential Game, Control and Optimization Problems

    DTIC Science & Technology

    1993-01-31

    (No abstract available; the record contains only a table-of-contents fragment: Controllability and Observability; Separation of Learning and Control; Linearization via Transformation of Coordinates and Nonlinear Feedback; Main Result; Discussion; Basic Structure of a NLM; General Structure of NNLM; Linear System.)

  17. A Constrained Linear Estimator for Multiple Regression

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.

    2010-01-01

    "Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…

  18. Spike-train spectra and network response functions for non-linear integrate-and-fire neurons.

    PubMed

    Richardson, Magnus J E

    2008-11-01

    Reduced models have long been used as a tool for the analysis of the complex activity taking place in neurons and their coupled networks. Recent advances in experimental and theoretical techniques have further demonstrated the usefulness of this approach. Despite the often gross simplification of the underlying biophysical properties, reduced models can still present significant difficulties in their analysis, with the majority of exact and perturbative results available only for the leaky integrate-and-fire model. Here an elementary numerical scheme is demonstrated which can be used to calculate a number of biologically important properties of the general class of non-linear integrate-and-fire models. Exact results for the first-passage-time density and spike-train spectrum are derived, as well as the linear response properties and emergent states of recurrent networks. Given that the exponential integrate-and-fire model has recently been shown to agree closely with the experimentally measured response of pyramidal cells, the methodology presented here promises to provide a convenient tool to facilitate the analysis of cortical-network dynamics.
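    As a concrete member of the non-linear integrate-and-fire class discussed above, the exponential integrate-and-fire neuron can be simulated in a few lines with forward Euler. This is only an illustrative sketch with typical textbook parameter values, not the paper's (more refined) numerical scheme for first-passage-time densities:

```python
from math import exp

def simulate_eif(I, T=1.0, dt=1e-4):
    """Forward-Euler simulation of an exponential integrate-and-fire
    neuron under constant drive I; returns the list of spike times (s).
    Parameters are illustrative: EL rest, VT soft threshold, DT slope
    factor (mV), tau membrane time constant (s)."""
    EL, VT, DT, tau = -65.0, -50.0, 2.0, 0.02
    Vreset, Vcut = -70.0, -30.0      # reset value / numerical spike cutoff
    V, spikes, t = EL, [], 0.0
    while t < T:
        dV = (-(V - EL) + DT * exp((V - VT) / DT) + I) / tau
        V += dt * dV
        t += dt
        if V >= Vcut:                # spike: record time and reset
            spikes.append(t)
            V = Vreset
    return spikes

quiet = simulate_eif(I=5.0)          # below rheobase: no spikes
firing = simulate_eif(I=30.0)        # above rheobase: repetitive firing
print(len(quiet), len(firing) > 0)   # 0 True
```

    For these parameters the rheobase is (VT - EL) - DT = 13, so I = 5 settles to a stable fixed point while I = 30 fires repetitively; the exact methods in the paper additionally yield the spike-train spectrum and network response functions.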

  19. Prosthetic Leg Control in the Nullspace of Human Interaction.

    PubMed

    Gregg, Robert D; Martin, Anne E

    2016-07-01

    Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.

  20. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    PubMed

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.

  1. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
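    The prior-as-regularizer idea can be sketched in a few lines. The toy example below does MAP estimation for a Bernoulli GLM (logistic link) with an isotropic Gaussian prior, fitted by plain gradient ascent; it is a simplified stand-in for the paper's method (which uses Expectation Propagation and compares Laplace priors and posterior-mean estimates), and the data and hyperparameters are invented:

```python
from math import exp
import random

def map_fit(X, y, prior_var=1.0, lr=0.2, steps=1500):
    """Gradient ascent on the log-posterior of a logistic GLM
    with a Gaussian prior on the weights (ridge-style shrinkage)."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [-wj / prior_var for wj in w]      # Gaussian prior term
        for xi, yi in zip(X, y):
            u = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + exp(-u))             # predicted response prob.
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj          # log-likelihood term
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

random.seed(0)
true_w = [2.0, -1.0]                              # invented ground truth
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if sum(wt * xj for wt, xj in zip(true_w, xi)) + random.gauss(0, 0.5) > 0
     else 0 for xi in X]
w_map = map_fit(X, y)
print(w_map[0] > 0 and w_map[1] < 0)              # True: signs match true_w
```

    The Gaussian prior shrinks the recovered weights toward zero, which is exactly the regularization role priors play in the paper; swapping in a Laplace prior would instead encourage sparsity.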

  2. Laboratory measurements of upwelled radiance and reflectance spectra of Calvert, Ball, Jordan, and Feldspar soil sediments

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Usry, J. W.; Witte, W. G.; Gurganus, E. A.

    1977-01-01

    An effort to investigate the potential of remote sensing for monitoring nonpoint source pollution was conducted. Spectral reflectance characteristics for four types of soil sediments were measured for mixture concentrations between 4 and 173 ppm. For measurements at a spectral resolution of 32 nm, the spectral reflectances of Calvert, Ball, Jordan, and Feldspar soil sediments were distinctly different over the wavelength range from 400 to 980 nm at each concentration tested. At high concentrations, spectral differences between the various sediments could be detected by measurements with a spectral resolution of 160 nm. At a low concentration, only small differences were observed between the various sediments when measurements were made with 160 nm spectral resolution. Radiance levels generally varied in a nonlinear manner with sediment concentration; linearity occurred in special cases, depending on sediment type, concentration range, and wavelength.

  3. Physical activity measurement in older adults: relationships with mental health.

    PubMed

    Parker, Sarah J; Strath, Scott J; Swartz, Ann M

    2008-10-01

    This study examined the relationship between physical activity (PA) and mental health among older adults as measured by objective and subjective PA-assessment instruments. Pedometers (PED), accelerometers (ACC), and the Physical Activity Scale for the Elderly (PASE) were administered to measure 1 week of PA among 84 adults age 55-87 (mean = 71) years. General mental health was measured using the Positive and Negative Affect Scale (PANAS) and the Satisfaction With Life Scale (SWL). Linear regressions revealed that PA estimated by PED significantly predicted 18.1%, 8.3%, and 12.3% of variance in SWL and positive and negative affect, respectively, whereas PA estimated by the PASE did not predict any mental health variables. Results from ACC data were mixed. Hotelling-Williams tests between correlation coefficients revealed that the relationship between PED and SWL was significantly stronger than the relationship between PASE and SWL. Relationships between PA and mental health might depend on the PA measure used.

  4. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

  5. Measuring Household Vulnerability: A Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Sethi, G.; Pierce, S. A.

    2016-12-01

    This research develops an index of vulnerability for Ugandan households using a variety of economic, social, and environmental variables, with two objectives. First, there is only a small body of research that measures household vulnerability. Given the stresses faced by households susceptible to water, environment, food, livelihood, energy, and health security concerns, it is critical that they be identified in order to make effective policy. We draw on the socio-ecological systems (SES) framework described by Ostrom (2009) and adapt the model developed by Giupponi, Giove, and Giannini (2013) to develop a composite measure. Second, most indices in the literature are linear in nature, relying on simple weighted averages. In this research, we contrast the results obtained by a simple weighted average with those obtained by using the Choquet integral. The Choquet integral is defined with respect to a fuzzy measure and is based on a generalization of the Lebesgue integral. Due to its non-additive nature, the Choquet integral offers a more general approach. Our results reveal that all households included in this study are highly vulnerable, and that vulnerability scores obtained by the fuzzy approach are significantly different from those obtained by using the simple weighted average (p = 9.46e-160).
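    The contrast between a simple weighted average and the discrete Choquet integral can be made concrete. In this sketch the three indicator scores and the capacity (fuzzy measure) values are invented for illustration, not taken from the study's calibration:

```python
def weighted_average(scores, weights):
    """Additive (linear) aggregation."""
    return sum(s * w for s, w in zip(scores, weights))

def choquet(scores, capacity):
    """Discrete Choquet integral: sort scores ascending and weight each
    increment by the capacity of the coalition of criteria at or above
    that level. capacity maps frozensets of criterion indices to [0, 1],
    with capacity(empty) = 0 and capacity(all) = 1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # criteria scoring >= scores[i]
        total += (scores[i] - prev) * capacity[coalition]
        prev = scores[i]
    return total

# Three vulnerability indicators (say, economic, social, environmental).
scores = [0.9, 0.4, 0.6]
# Non-additive capacity: joint contributions worth less than the sum of
# singletons (modeling redundancy between indicators).
capacity = {
    frozenset(): 0.0,
    frozenset({0}): 0.5, frozenset({1}): 0.3, frozenset({2}): 0.4,
    frozenset({0, 1}): 0.7, frozenset({0, 2}): 0.8, frozenset({1, 2}): 0.6,
    frozenset({0, 1, 2}): 1.0,
}
wa = weighted_average(scores, [0.5, 0.3, 0.2])
cv = choquet(scores, capacity)
print(round(wa, 3), round(cv, 3))   # 0.69 0.71
```

    When the capacity is additive the Choquet integral collapses to the weighted average; the gap between the two numbers is exactly what the non-additive capacity contributes.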

  6. Univariate normalization of bispectrum using Hölder's inequality.

    PubMed

    Shahbazi, Forooz; Ewald, Arne; Nolte, Guido

    2014-08-15

    Considering that many biological systems including the brain are complex non-linear systems, suitable methods capable of detecting these non-linearities are required to study the dynamical properties of these systems. One of these tools is the third-order cumulant or cross-bispectrum, which is a measure of interfrequency interactions between three signals. For convenient interpretation, interaction measures are most commonly normalized to be independent of constant scales of the signals such that their absolute values are bounded by one, with this limit reflecting perfect coupling. Although many different normalization factors for cross-bispectra have been suggested in the literature, these either do not lead to bounded measures or are themselves dependent on the coupling and not only on the scale of the signals. In this paper we suggest a normalization factor which is univariate, i.e., dependent only on the amplitude of each signal and not on the interactions between signals. Using a generalization of Hölder's inequality it is proven that the absolute value of this univariate bicoherence is bounded between zero and one. We compared three widely used normalizations to the univariate normalization concerning the significance of bicoherence values gained from resampling tests. Bicoherence values are calculated from real EEG data recorded in an eyes-closed experiment from 10 subjects. The results show slightly more significant values for the univariate normalization but in general, the differences are very small or even vanishing in some subjects. Therefore, we conclude that the normalization factor does not play an important role in the bicoherence values with regard to statistical power, although a univariate normalization is the only normalization factor which fulfills all the required conditions of a proper normalization. Copyright © 2014 Elsevier B.V. All rights reserved.
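A univariate bound of this kind follows from the generalized Hölder inequality with exponents (3, 3, 3): |E[z1 z2 z3*]| ≤ (E|z1|³ E|z2|³ E|z3|³)^(1/3), so the normalized value cannot exceed one by construction. A sketch (numpy; the phase-coupled test signal and frequencies are hypothetical, and this normalization is an assumption modeled on the abstract, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, seg_len, n_seg = 256, 256, 200
t = np.arange(seg_len) / fs

# Quadratically coupled signal: components at 10 and 22 Hz plus their sum
segs = []
for _ in range(n_seg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    s = (np.cos(2 * np.pi * 10 * t + p1) + np.cos(2 * np.pi * 22 * t + p2)
         + 0.5 * np.cos(2 * np.pi * 32 * t + p1 + p2)   # coupled term
         + 0.1 * rng.standard_normal(seg_len))
    segs.append(np.fft.rfft(s))
X = np.array(segs)

f1, f2 = 10, 22                    # bins coincide with Hz (1 Hz resolution)
z1, z2, z3 = X[:, f1], X[:, f2], X[:, f1 + f2]
bispec = np.mean(z1 * z2 * np.conj(z3))

# Univariate normalization via generalized Hoelder (p = q = r = 3):
# each factor depends only on one signal's amplitudes, not on coupling
norm = (np.mean(np.abs(z1) ** 3) * np.mean(np.abs(z2) ** 3)
        * np.mean(np.abs(z3) ** 3)) ** (1 / 3)
bicoh = np.abs(bispec) / norm
print(bicoh)                       # <= 1 by construction; near 1 here
```

Because the simulated phase at 32 Hz is locked to p1 + p2, the segment-averaged bispectrum adds coherently and the bicoherence approaches its upper bound.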

  7. Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.

    PubMed

    Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles

    2016-02-01

    In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region within one or two ignitions. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to that of four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. © 2015 Society for Risk Analysis.
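The Poisson GLM baseline for count data like ignition tallies can be fitted with iteratively reweighted least squares (Newton-Raphson for the log link). A hand-rolled sketch (numpy only; the covariate and coefficients are synthetic and hypothetical, not the study's data or recommended GAM):

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson regression with log link via iteratively reweighted
    least squares. X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # mean = exp(linear predictor)
        W = mu                           # Poisson working weights
        z = X @ beta + (y - mu) / mu     # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

# Synthetic ignition counts driven by a ground-motion covariate
rng = np.random.default_rng(2)
n = 2000
intensity = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), intensity])
beta_true = np.array([-1.0, 2.0])
y = rng.poisson(np.exp(X @ beta_true))

beta_hat = fit_poisson_glm(X, y)
print(beta_hat)    # close to the generating coefficients [-1.0, 2.0]
```

A GAM would replace the linear term in `intensity` with a smooth function, which is how it absorbs the nonlinearity the abstract notes in the ignition-ground motion relationship.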

  8. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    PubMed

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
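The abstract does not reproduce the Lambert W model's functional form, so the sketch below shows only the traditional semilogarithmic HVL interpolation the paper benchmarks against: assume exponential attenuation between the two measured points bracketing 50% transmission (numpy; synthetic monoenergetic data, for which this method is exact):

```python
import numpy as np

def hvl_semilog(x, T):
    """Half-value layer by semilogarithmic (exponential) interpolation
    between the two measured points that bracket T = 0.5."""
    x, T = np.asarray(x, float), np.asarray(T, float)
    i = np.searchsorted(-T, -0.5)      # T decreases with thickness x
    x0, x1, T0, T1 = x[i - 1], x[i], T[i - 1], T[i]
    # assume T = A * exp(-m * x) between the bracketing points
    m = np.log(T0 / T1) / (x1 - x0)
    return x0 + np.log(T0 / 0.5) / m

# Synthetic monoenergetic transmission data: mu = 0.5 /mm -> HVL = ln(2)/0.5
mu = 0.5
thick = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
trans = np.exp(-mu * thick)
print(hvl_semilog(thick, trans), np.log(2) / mu)   # both ~1.386 mm
```

For polyenergetic beams the effective attenuation coefficient changes with depth (beam hardening), which is why semilog interpolation becomes sensitive to the choice of bracketing points and why the paper's Lambert W fit over all thicknesses performs better.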

  9. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    PubMed Central

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626

  10. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen

    2011-08-15

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  11. Association of Breast Feeding and Birth Weight with Anthropometric Measures and Blood Pressure in Children and Adolescents: The CASPIAN-IV Study.

    PubMed

    Djalalinia, Shirin; Qorbani, Mostafa; Heshmat, Ramin; Motlagh, Mohammad Esmaeil; Ardalan, Gelayol; Bazyar, Nima; Taheri, Majzoubeh; Asayesh, Hamid; Kelishadi, Roya

    2015-10-01

    Noncommunicable diseases (NCDs) and their risk factors are major health threats, especially for developing countries. The aim of this study was to assess the association between breast feeding (BF) and birth weight (BW) with anthropometric measures and blood pressure (BP) in a nationally representative sample of Iranian children and adolescents. In this national survey, 14,880 children and adolescents, aged 6-18 years, were selected using a multistage, cluster sampling method from rural and urban areas of 30 provinces of Iran. BF duration and BW were assessed by validated questionnaires completed by parents. The study participants were 13,486 students (participation rate of 90.6%). They consisted of 49.24% girls and 75.6% urban residents, with a mean age of 12.5 years (95% confidence interval: 12.3-12.6). The family history of obesity had a significant association with BW (p < 0.001). A substantial association was found between BF duration and the order of children in the family, both in boys (p < 0.001) and girls (p < 0.001). The mean values for height, weight, body mass index, as well as waist, wrist, and hip circumferences were higher in those with higher BW categories (p for trend < 0.001). As BW increased, a linear decrease in underweight (p for trend < 0.001) and a linear increase in the prevalence of generalized obesity (p for trend < 0.001) were documented. BW was associated with a higher prevalence of general obesity and a lower prevalence of being underweight. Duration of BF had no significant association with anthropometric measures and BP. Future longitudinal studies are necessary to determine the clinical implications of these findings. Copyright © 2015. Published by Elsevier B.V.

  12. Quality measurement in the shunt treatment of hydrocephalus: analysis and risk adjustment of the Revision Quotient.

    PubMed

    Piatt, Joseph H; Freibott, Christina E

    2014-07-01

    OBJECT: The Revision Quotient (RQ) has been defined as the ratio of the number of CSF shunt revisions to the number of new shunt insertions for a particular neurosurgical practice in a unit of time. The RQ has been proposed as a quality measure in the treatment of childhood hydrocephalus. The authors examined the construct validity of the RQ and explored the feasibility of risk stratification under this metric. The Kids' Inpatient Database for 1997, 2000, 2003, 2006, and 2009 was queried for admissions with diagnostic codes for hydrocephalus and procedural codes for CSF shunt insertion or revision. Revision quotients were calculated for hospitals that performed 12 or more shunt insertions annually. The univariate associations of hospital RQs with a variety of institutional descriptors were analyzed, and a generalized linear model of the RQ was constructed. There were 12,244 admissions (34%) during which new shunts were inserted, and there were 23,349 admissions (66%) for shunt revision. Three hundred thirty-four annual RQs were calculated for 152 different hospitals. Analysis of variance in hospital RQs over the 5 years of study data supports the construct validity of the metric. The following factors were incorporated into a generalized linear model that accounted for 41% of the variance of the measured RQs: degree of pediatric specialization, proportion of initial case mix in the infant age group, and proportion with neoplastic hydrocephalus. The RQ has construct validity. Risk adjustment is feasible, but the risk factors that were identified relate predominantly to patterns of patient flow through the health care system. Possible advantages of an alternative metric, the Surgical Activity Ratio, are discussed.
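The metric itself is simple arithmetic: annual revisions divided by annual insertions, computed only for hospitals above the study's 12-insertion threshold. A minimal sketch (pure Python; the admission counts are hypothetical toy data):

```python
from collections import Counter

# Hypothetical admission records: (hospital, year, procedure)
admissions = ([("A", 2009, "insertion")] * 20 + [("A", 2009, "revision")] * 30
              + [("B", 2009, "insertion")] * 15 + [("B", 2009, "revision")] * 12)

counts = Counter(admissions)

def revision_quotient(hospital, year, min_insertions=12):
    """Annual RQ = revisions / insertions; hospitals below the study's
    12-insertion annual threshold are excluded (None)."""
    ins = counts[(hospital, year, "insertion")]
    rev = counts[(hospital, year, "revision")]
    return rev / ins if ins >= min_insertions else None

print(revision_quotient("A", 2009), revision_quotient("B", 2009))
```

An RQ above 1, as for hospital "A" here, means more revisions than new insertions in that year, which the paper attributes partly to patient-flow patterns rather than quality alone.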

  13. Roll Damping Derivatives from Generalized Lifting-Surface Theory and Wind Tunnel Forced-Oscillation Tests

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S; Murphy, Patrick C.

    2014-01-01

    Improving aerodynamic models for adverse loss-of-control conditions in flight is an area being researched under the NASA Aviation Safety Program. Aerodynamic models appropriate for loss-of-control conditions require a more general mathematical representation to predict nonlinear unsteady behaviors. As more general aerodynamic models that include nonlinear higher-order effects are studied, measurements that confound aerodynamic and structural responses become probable. In this study an initial step is taken to include structural flexibility in the analysis of rigid-body forced-oscillation testing, accounting for dynamic rig, sting, and balance flexibility. Because of the significant testing required and associated costs in a general study, it makes sense to capitalize on low-cost analytical methods where possible, especially where structural flexibility can be accounted for by such a method. This paper provides an initial look at applying linear lifting-surface theory to rigid-body aircraft roll forced-oscillation tests.

  14. Generalized quantum no-go theorems of pure states

    NASA Astrophysics Data System (ADS)

    Li, Hui-Ran; Luo, Ming-Xing; Lai, Hong

    2018-07-01

    Various results of the no-cloning theorem, no-deleting theorem and no-superposing theorem in quantum mechanics have been proved using the superposition principle and the linearity of quantum operations. In this paper, we investigate general transformations forbidden by quantum mechanics in order to unify these theorems. First, we prove that no useful information can be created from an unknown pure state which is randomly chosen from a Hilbert space according to the Haar measure. Second, we propose a unified no-go theorem based on a generalized no-superposing result. The new theorem includes the no-cloning theorem, no-anticloning theorem, no-partial-erasure theorem, no-splitting theorem, no-superposing theorem or no-encoding theorem as a special case. Moreover, it implies various new results. Third, we extend the new theorem into another form that includes the no-deleting theorem as a special case.

  15. Query construction, entropy, and generalization in neural-network models

    NASA Astrophysics Data System (ADS)

    Sollich, Peter

    1994-05-01

    We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
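In the high-low game the version space consistent with the labels is an interval, and the maximum-information (minimum-entropy) query is the one that bisects it. A sketch comparing queried versus random examples (pure Python; the threshold value and counts are hypothetical):

```python
import random

# High-low game: unknown threshold t in [0, 1]; an example x is labeled by
# whether x >= t. The version space after consistent examples is an
# interval whose width measures the remaining generalization uncertainty.

def interval_after(examples, t, lo=0.0, hi=1.0):
    for x in examples:
        if x >= t:
            hi = min(hi, x)
        else:
            lo = max(lo, x)
    return lo, hi

random.seed(0)
t = 0.637
n = 10

# Minimum-entropy query: always bisect the current version space
lo, hi = 0.0, 1.0
for _ in range(n):
    x = (lo + hi) / 2
    lo, hi = interval_after([x], t, lo, hi)
width_query = hi - lo           # exactly 2**-n: halved at every query

# Random examples for comparison
lo, hi = interval_after([random.random() for _ in range(n)], t)
width_random = hi - lo

print(width_query, width_random)   # 2**-10 vs. typically much larger
```

This illustrates the exponential-versus-algebraic gap for "noninvertible" nonlinear rules; for the linear perceptron the paper finds no such asymptotic advantage of queries over random examples.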

  16. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which is applicable to any type of multivariate public-key cryptosystem for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  17. Juvenile Scleroderma

    MedlinePlus

    ... morphea, linear scleroderma, and scleroderma en coup de sabre. Each type can be subdivided further and some ... described for morphea. Linear scleroderma en coup de sabre is the term generally applied when children have ...

  18. Determination of effective mechanical properties of a double-layer beam by means of a nano-electromechanical transducer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hocke, Fredrik; Pernpeintner, Matthias; Gross, Rudolf, E-mail: rudolf.gross@wmi.badw.de

    We investigate the mechanical properties of a doubly clamped, double-layer nanobeam embedded into an electromechanical system. The nanobeam consists of a highly pre-stressed silicon nitride and a superconducting niobium layer. By measuring the mechanical displacement spectral density both in the linear and the nonlinear Duffing regime, we determine the pre-stress and the effective Young's modulus of the nanobeam. An analytical double-layer model quantitatively corroborates the measured values. This suggests that this model can be used to design mechanical multilayer systems for electro- and optomechanical devices, including materials controllable by external parameters such as piezoelectric, magnetostrictive, or more general multiferroic materials.

  19. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  20. Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Huang, Hua-Lin; Yang, Yuping

    2016-01-01

    By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a conceptual and simpler manner.

  1. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large and non-linear models in higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively for general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale.
The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grid cells in practical time (e.g., less than a second per time step).

  2. Roles of nonlocal conductivity on spin Hall angle measurement

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Zhang, Shufeng

    2017-10-01

    Spin Hall angle characterizes the rate of spin-charge current conversion and it has become one of the most important material parameters for spintronics physics and device application. A long-standing controversy is that the spin Hall angles for a given material measured by spin pumping and by spin Hall torque experiments are inconsistent and they could differ by as much as an order of magnitude. By using the linear response spin transport theory, we explicitly formulate the relation between the spin Hall angle and measured variables in different experiments. We find that the nonlocal conductivity inherited in the layered structure plays a key role to resolve conflicting values of the spin Hall angle. We provide a generalized scheme for extracting spin transport coefficients from experimental data.

  3. The microwave propagation and backscattering characteristics of vegetation. [wheat, sorghum, soybeans and corn fields in Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Wilson, E. A.

    1984-01-01

    A semi-empirical model for microwave backscatter from vegetation was developed and a complete set of canopy attenuation measurements as a function of frequency, incidence angle and polarization was acquired. The semi-empirical model was tested on corn and sorghum data over the 8 to 35 GHz range. The model generally provided an excellent fit to the data as measured by the correlation and rms error between observed and predicted data. The model also predicted reasonable values of canopy attenuation. The attenuation data was acquired over the 1.6 to 10.2 GHz range for the linear polarizations at approximately 20 deg and 50 deg incidence angles for wheat and soybeans. An attenuation model is proposed which provides reasonable agreement with the measured data.

  4. Pulmonary function of U.S. coal miners related to dust exposure estimates.

    PubMed

    Attfield, M D; Hodous, T K

    1992-03-01

    This study of 7,139 U.S. coal miners used linear regression analysis to relate estimates of cumulative dust exposure to several pulmonary function variables measured during medical examinations undertaken between 1969 and 1971. The exposure data included newly derived cumulative dust exposure estimates for the period up to time of examination based on large data bases of underground airborne dust sampling measurements. Negative associations were found between measures of cumulative exposure and FEV1, FVC, and the FEV1/FVC ratio (p less than 0.001). In general, the relationships were similar to those reported for British coal miners. Overall, the results demonstrate an adverse effect of coal mine dust exposure on pulmonary function that occurs even in the absence of radiographically detected pneumoconiosis.
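The study's core method is multiple linear regression of a lung-function variable on cumulative dust exposure. A minimal sketch (numpy; all numbers are synthetic and hypothetical, not the study's data, and a real analysis would adjust for more covariates such as height and smoking):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Hypothetical data: cumulative dust exposure (arbitrary units), age (yr),
# and FEV1 (mL) generated with a negative exposure effect
exposure = rng.uniform(0, 300, n)
age = rng.uniform(20, 60, n)
fev1 = 4500 - 25 * age - 1.2 * exposure + rng.normal(0, 300, n)

# Ordinary least squares: FEV1 ~ intercept + age + exposure
X = np.column_stack([np.ones(n), age, exposure])
coef, *_ = np.linalg.lstsq(X, fev1, rcond=None)
print(coef)    # exposure coefficient recovered near the generating -1.2
```

A negative fitted exposure coefficient, as in the study, quantifies the average FEV1 decrement per unit of cumulative dust exposure after adjusting for the other covariates.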

  5. Ice Growth Measurements from Image Data to Support Ice Crystal and Mixed-Phase Accretion Testing

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Lynch, Christopher J.

    2012-01-01

    This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.

  6. Ice Growth Measurements from Image Data to Support Ice-Crystal and Mixed-Phase Accretion Testing

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Lynch, Christopher J.

    2012-01-01

    This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.

  7. Measurement and correlation of jet fuel viscosities at low temperatures

    NASA Technical Reports Server (NTRS)

    Schruben, D. L.

    1985-01-01

    Apparatus and procedures were developed to measure jet fuel viscosity for eight current and future jet fuels at temperatures from ambient to near -60 C by shear viscometry. Viscosity data showed good reproducibility even at temperatures a few degrees below the measured freezing point. The viscosity-temperature relationship could be correlated by two linear segments when plotted as a standard log-log type representation (ASTM D 341). At high temperatures, the viscosity-temperature slope is low. At low temperatures, where wax precipitation is significant, the slope is higher. The breakpoint between temperature regions is the filter flow temperature, a fuel characteristic approximated by the freezing point. A generalization of the representation for the eight experimental fuels provided a predictive correlation for low-temperature viscosity, considered sufficiently accurate for many design or performance calculations.
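ASTM D 341 represents kinematic viscosity as log10(log10(ν + 0.7)) linear in log10(T), so each segment of the two-segment correlation can be fitted from two points. A sketch of the simplified two-constant form (math stdlib; the fuel data points are hypothetical, and the constant 0.7 strictly applies only above about 2 cSt):

```python
import math

def d341_fit(T1, nu1, T2, nu2):
    """Fit log10(log10(nu + 0.7)) = A - B*log10(T) through two points
    (temperature in kelvin, kinematic viscosity in cSt)."""
    y1 = math.log10(math.log10(nu1 + 0.7))
    y2 = math.log10(math.log10(nu2 + 0.7))
    B = (y1 - y2) / (math.log10(T2) - math.log10(T1))
    A = y1 + B * math.log10(T1)
    return A, B

def d341_nu(A, B, T):
    """Invert the correlation to predict viscosity at temperature T."""
    return 10 ** (10 ** (A - B * math.log10(T))) - 0.7

# Hypothetical jet-fuel-like points on the high-temperature segment
A, B = d341_fit(293.15, 1.6, 253.15, 4.0)
print(d341_nu(A, B, 233.15))   # extrapolated low-temperature viscosity
```

Below the filter flow temperature the measured slope steepens, so a second (A, B) pair fitted to low-temperature points would be used there instead of extrapolating the high-temperature segment as above.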

  8. Continuous functional magnetic resonance imaging reveals dynamic nonlinearities of "dose-response" curves for finger opposition.

    PubMed

    Berns, G S; Song, A W; Mao, H

    1999-07-15

    Linear experimental designs have dominated the field of functional neuroimaging; although successful at mapping regions of relative brain activation, this approach assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.
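
    A simple way to test the linearity assumption discussed above is to compare linear and quadratic least-squares fits of response against tapping frequency. The sketch below uses synthetic data (not the study's), with a deliberately saturating response:

```python
import numpy as np

# Synthetic "dose-response" curve: response versus tapping frequency with a
# mild saturating nonlinearity plus noise. If the process were linear, a
# quadratic fit would not reduce the residual sum of squares much.
rng = np.random.default_rng(0)
freq = np.linspace(1.0, 5.0, 41)                   # tapping frequency, Hz
response = 1.0 + 0.8 * freq - 0.12 * freq**2       # saturating (nonlinear)
response += rng.normal(0.0, 0.01, freq.size)       # measurement noise

def fit_residual(degree):
    coeffs = np.polyfit(freq, response, degree)
    return np.sum((np.polyval(coeffs, freq) - response) ** 2)

lin_rss, quad_rss = fit_residual(1), fit_residual(2)
print(f"linear RSS {lin_rss:.4f}  quadratic RSS {quad_rss:.4f}")
```

    A much smaller quadratic residual flags a nonlinear dose-response curve; hysteresis, as reported in the abstract, would additionally require comparing ascending and descending frequency sweeps.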

  9. Extensions of the Ferry shear wave model for active linear and nonlinear microrheology

    PubMed Central

    Mitran, Sorin M.; Forest, M. Gregory; Yao, Lingxing; Lindley, Brandon; Hill, David B.

    2009-01-01

    The classical oscillatory shear wave model of Ferry et al. [J. Polym. Sci. 2:593-611 (1947)] is extended for active linear and nonlinear microrheology. In the Ferry protocol, oscillation and attenuation lengths of the shear wave measured from strobe photographs determine storage and loss moduli at each frequency of plate oscillation. The microliter volumes typical in biology require modifications of experimental method and theory. Microbead tracking replaces strobe photographs. Reflection from the top boundary yields counterpropagating modes which are modeled here for linear and nonlinear viscoelastic constitutive laws. Furthermore, bulk imposed strain is easily controlled, and we explore the onset of normal stress generation and shear thinning using nonlinear viscoelastic models. For this paper, we present the theory, exact linear and nonlinear solutions where possible, and simulation tools more generally. We then illustrate errors in inverse characterization by application of the Ferry formulas, due to both suppression of wave reflection and nonlinearity, even if there were no experimental error. This shear wave method presents an active and nonlinear analog of the two-point microrheology of Crocker et al. [Phys. Rev. Lett. 85:888-891 (2000)]. Nonlocal (spatially extended) deformations and stresses are propagated through a small volume sample, on wavelengths long relative to bead size. The setup is ideal for exploration of nonlinear threshold behavior. PMID:20011614

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenon, Cedric; Lake, Kayll

    We generalize the Swiss-cheese cosmologies so as to include nonzero linear momenta of the associated boundary surfaces. The evolution of mass scales in these generalized cosmologies is studied for a variety of models for the background without having to specify any details within the local inhomogeneities. We find that the final effective gravitational mass and size of the evolving inhomogeneities depend on their linear momenta, but these properties are essentially unaffected by the details of the background model.

  11. High growth rate homoepitaxial diamond film deposition at high temperatures by microwave plasma-assisted chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Vohra, Yogesh K. (Inventor); McCauley, Thomas S. (Inventor)

    1997-01-01

    The deposition of high-quality diamond films at high linear growth rates and high substrate temperatures by microwave plasma-assisted chemical vapor deposition (MPCVD) is disclosed. The linear growth rate achieved for this process is generally greater than 50 µm/hr for high-quality films, as compared to rates of less than 5 µm/hr generally reported for MPCVD processes.

  12. Computer-aided linear-circuit design.

    NASA Technical Reports Server (NTRS)

    Penfield, P.

    1971-01-01

    Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.

  13. Covariant electrodynamics in linear media: Optical metric

    NASA Astrophysics Data System (ADS)

    Thompson, Robert T.

    2018-03-01

    While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
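
    For orientation, the standard pseudo-Riemannian optical metric for a nonbirefringent, isotropic medium is Gordon's metric; this is the classical result that a nonbirefringent reduction of the kind mentioned above recovers (stated here from the general literature, not taken from this paper):

```latex
% Gordon's optical metric for a nonbirefringent medium of refractive
% index n and 4-velocity u^\mu in a background metric g^{\mu\nu}
% (sign conventions depend on the chosen metric signature):
\bar{g}^{\mu\nu} = g^{\mu\nu} + \left(1 - \frac{1}{n^{2}}\right) u^{\mu} u^{\nu}
```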

  14. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
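
    The binary-program case mentioned above can be illustrated in miniature. A real implicit-enumeration solver prunes partial assignments; this hypothetical toy version simply checks every 0/1 vector of a small problem, which conveys the idea without the pruning machinery:

```python
from itertools import product

# Maximize c.x subject to A.x <= b with x binary. All data are hypothetical.
c = [5, 4, 3]                    # objective coefficients
A = [[2, 3, 1],                  # constraint rows
     [4, 1, 2]]
b = [5, 11]

best_value, best_x = None, None
for x in product((0, 1), repeat=len(c)):
    # Keep x only if every constraint row is satisfied.
    if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
           for row, b_i in zip(A, b)):
        value = sum(c_j * x_j for c_j, x_j in zip(c, x))
        if best_value is None or value > best_value:
            best_value, best_x = value, x
print(best_value, best_x)
```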

  15. Generalized Kramers-Kronig relations in nonlinear optical- and THz-spectroscopy

    NASA Astrophysics Data System (ADS)

    Peiponen, K.-E.; Saarinen, J. J.

    2009-05-01

    Kramers-Kronig (K-K) relations have constituted one of the principal tools in optical spectroscopy for assessing the optical properties of media from measured spectra. The underlying principle for the existence of the K-K relations is causality. Thanks to the K-K relations we have achieved a better understanding of both the macroscopic and microscopic properties of media. Recently, various kinds of modified K-K relations have been presented in the literature. Such relations have been applied, e.g., to the nonlinear optical properties of polymers. A typical advantage of these generalized K-K relations is that the measured data do not need to be manipulated as in the case of the traditional K-K relations. Hence, the accuracy of the inverted data on linear or nonlinear optical properties of media becomes higher. A novel way to utilize generalized K-K relations is in the measurement and correction of terahertz spectra in time-domain reflection spectroscopy. Terahertz spectroscopy is nowadays one of the most rapidly developing fields in modern physics, with applications related to, e.g., airport security or the inspection of pharmaceutical tablets. While recording THz spectra it is also possible to perform chemical mapping of species. Correctness of the spectrum is therefore of crucial importance for the identification of different species, and such correction is made possible by the generalized K-K relations. In this review paper we consider advances in K-K relations in both nonlinear optical and THz spectroscopy.
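
    The causality principle underlying the K-K relations has a clean discrete analog: for a causal response, the imaginary part of the spectrum is fully determined by the real part. The sketch below demonstrates this with an arbitrary damped oscillation (chosen only for illustration, not taken from the paper):

```python
import numpy as np

# Causal impulse response: nonzero only for n < N/2 (a hypothetical
# damped oscillation).
N = 256
n = np.arange(N)
h = np.zeros(N)
h[: N // 2] = np.exp(-0.05 * n[: N // 2]) * np.cos(0.3 * n[: N // 2])

H = np.fft.fft(h)

# Re(H) is the transform of the (circularly) even part of h; causality
# lets us rebuild all of h, and hence Im(H), from Re(H) alone.
even = np.fft.ifft(H.real).real
h_rec = np.zeros(N)
h_rec[0] = even[0]
h_rec[1 : N // 2] = 2.0 * even[1 : N // 2]
err = np.max(np.abs(np.fft.fft(h_rec).imag - H.imag))
print(f"max reconstruction error in Im(H): {err:.2e}")
```

    This is the discrete counterpart of recovering one quadrature of a causal susceptibility from the other, which is what the continuous K-K integrals do.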

  16. GVE-Based Dynamics and Control for Formation Flying Spacecraft

    NASA Technical Reports Server (NTRS)

    Breger, Louis; How, Jonathan P.

    2004-01-01

    Formation flying is an enabling technology for many future space missions. This paper presents extensions to the equations of relative motion expressed in Keplerian orbital elements, including new initialization techniques for general formation configurations. A new linear time-varying form of the equations of relative motion is developed from Gauss Variational Equations and used in a model predictive controller. The linearizing assumptions for these equations are shown to be consistent with typical formation flying scenarios. Several linear, convex initialization techniques are presented, as well as a general, decentralized method for coordinating a tetrahedral formation using differential orbital elements. Control methods are validated using a commercial numerical propagator.

  17. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  18. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.

  19. Response time scores on a reflexive attention task predict a child's inattention score from a parent report.

    PubMed

    Lundwall, Rebecca A; Sgro, Jordan F; Fanger, Julia

    2018-01-01

    Compared to sustained attention, only a small proportion of studies examine reflexive attention as a component of everyday attention. Understanding the significance of reflexive attention to everyday attention may inform better treatments for attentional disorders. Children from a general population (recruited at ages 9-16) completed an exogenously cued task measuring the extent to which attention is captured by peripheral cue-target conditions. Parents completed a questionnaire reporting their child's day-to-day attention. A general linear model indicated that parent-rated inattention predicted the increase in response time over baseline when a bright cue preceded the target (whether it was valid or invalid) but not when a dim cue preceded the target. More inattentive children had more pronounced response time increases from baseline. Our findings suggest a link between a basic measure of cognition (response time difference scores) and parent observations. The findings have implications for increased understanding of the role of reflexive attention in the everyday attention of children.
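
    A general linear model of the kind described above amounts to ordinary least squares on a design matrix with an intercept and covariates. A minimal sketch with synthetic data (variable names and effect sizes are illustrative only, not the study's):

```python
import numpy as np

# Synthetic cohort: regress parent-rated inattention on the response-time
# increase over baseline after a bright cue, with age as a covariate.
rng = np.random.default_rng(1)
n_children = 60
rt_increase_bright = rng.normal(50.0, 15.0, n_children)  # ms over baseline
age = rng.uniform(9.0, 16.0, n_children)                 # years (covariate)
inattention = (0.08 * rt_increase_bright - 0.2 * age
               + rng.normal(0.0, 1.0, n_children))       # simulated scores

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n_children), rt_increase_bright, age])
beta, *_ = np.linalg.lstsq(X, inattention, rcond=None)
print("intercept, RT slope, age slope:", beta)
```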

  20. Multidimensional entropic uncertainty relation based on a commutator matrix in position and momentum spaces

    NASA Astrophysics Data System (ADS)

    Hertz, Anaelle; Vanbever, Luc; Cerf, Nicolas J.

    2018-01-01

    The uncertainty relation for continuous variables due to Białynicki-Birula and Mycielski [I. Białynicki-Birula and J. Mycielski, Commun. Math. Phys. 44, 129 (1975), 10.1007/BF01608825] expresses the complementarity between two n-tuples of canonically conjugate variables (x1,x2,...,xn) and (p1,p2,...,pn) in terms of Shannon differential entropy. Here we consider the generalization to variables that are not canonically conjugate and derive an entropic uncertainty relation expressing the balance between any two n-variable Gaussian projective measurements. The bound on entropies is expressed in terms of the determinant of a matrix of commutators between the measured variables. This uncertainty relation also captures the complementarity between any two incompatible linear canonical transforms, the bound being written in terms of the corresponding symplectic matrices in phase space. Finally, we extend this uncertainty relation to Rényi entropies and also prove a covariance-based uncertainty relation which generalizes the Robertson relation.
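
    The Białynicki-Birula-Mycielski bound referenced above is usually quoted in the following form (stated from the general literature; the paper's contribution is the commutator-matrix generalization for non-conjugate variables):

```latex
% BBM entropic uncertainty relation for n canonically conjugate pairs
% with [x_j, p_j] = i\hbar, where h(\cdot) denotes Shannon differential
% entropy:
h(x_1,\dots,x_n) + h(p_1,\dots,p_n) \ge n \ln(\pi e \hbar)
```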
